Planet FreeBSD

April 06, 2021

HardenedBSD

FreeBSD's ports migration to git and its impact on HardenedBSD

FreeBSD completed their ports migration from subversion to git. Prior to the official switch, we used the read-only mirror FreeBSD had at GitHub[1]. The new repo is at [2]. A cursory glance at the new repo will show that the commit hashes changed. This presents an issue with HardenedBSD's ports tree in our merge-based workflow.

I'm going to archive our old ports repo[3] and create a new repo[4]. Due to the nature of our changes and how far back our history goes, creating a new repo is necessary. Attempting a `git merge --allow-unrelated-histories` technically runs, but it brings GitLab to its knees and eventually fails.

For projects downstream of HardenedBSD using the same kind of merge-based workflow, you will need to do effectively the same thing we did. Here's the process I followed (a consolidated sketch follows the list):

  1. Match the last commit of the old repo (e7ad26d92beff76bbead7b6b675ad5c551e86fa9) with its corresponding commit in FreeBSD's old ports repo (4010f7bbc03638d71781ce091bf40a0907fa12fe).
  2. Clone FreeBSD's new repo to a temporary location
  3. Generate a diff between the old repo and the new: git diff 4010f7bbc03638d71781ce091bf40a0907fa12fe e7ad26d92beff76bbead7b6b675ad5c551e86fa9
  4. Create the hardenedbsd/main branch
  5. Apply the diff created in step 3
  6. Commit and push
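
For the curious, the whole thing roughly boils down to the shell session below. This is an untested sketch: the clone URL for FreeBSD's new ports repo and the name of the new HardenedBSD remote are placeholders, and the old-repo URLs are the ones referenced above.

  # clone FreeBSD's new ports repo to a temporary location (placeholder URL)
  git clone <freebsd-new-ports-repo-url> ports && cd ports

  # fetch both old histories so the two commit hashes resolve locally
  git remote add freebsd-old https://github.com/freebsd/freebsd-ports
  git remote add hbsd-old https://git.hardenedbsd.org/hardenedbsd/hardenedbsd-ports
  git fetch freebsd-old && git fetch hbsd-old

  # diff FreeBSD's last matching commit against our last commit, then replay it
  git diff 4010f7bbc03638d71781ce091bf40a0907fa12fe \
           e7ad26d92beff76bbead7b6b675ad5c551e86fa9 > hbsd.diff
  git checkout -b hardenedbsd/main
  git apply hbsd.diff
  git add -A
  git commit -m "Re-apply HardenedBSD ports changes on top of the new FreeBSD ports repo"
  git push <new-hardenedbsd-remote> hardenedbsd/main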

Instead of committing directly to the master branch, we created a new branch "hardenedbsd/main". Upstream's branch in our repo is "freebsd/main".

I apologize for the breakage to projects downstream of us. I made a "best effort" attempt at preventing it, but the ports repo is simply too large for that to be feasible.

Going forward, please use the new repo at [4]. No merge requests or bug reports for the old repo at [3] will be accepted.

I plan to restore the every-six-hour autosync script this weekend. While the dust settles, I will run the syncs by hand; things should be back to normal by the weekend.

[1]: https://github.com/freebsd/freebsd-ports/
[2]: https://cgit.freebsd.org/ports/
[3]: https://git.hardenedbsd.org/hardenedbsd/hardenedbsd-ports/
[4]: https://git.hardenedbsd.org/hardenedbsd/ports

by Shawn Webb at April 06, 2021 04:28 PM

April 05, 2021

Bobulate

Steam on FreeBSD

Steam is a gaming platform that sells and manages games on Windows and Linux. Since FreeBSD has some pretty good Linux emulation, it is possible – with some footnotes – to run Linux Steam Games on FreeBSD. This was already possible in 2016 but the tooling keeps being updated, so let’s take a look at how things work.

I’m writing about things that other people have built. It’s their efforts in the Open Source world that enable my entertainment, so show some appreciation to the (otherwise largely anonymous) folks who make this possible.

Edit: some comments from shkhln in the FreeBSD discord server have prompted me to update this. Errors remain my own.

nVidia? Nah

While most of the time nVidia graphics cards give excellent results on FreeBSD – with the proprietary drivers – I have not been able to get mine to do anything useful. I'm told that it's supposed to work and that most success reports come in that way, but I personally don't get any further than some vague X errors and a hang, on the workstation that runs my KDE Plasma desktop the rest of the week (with a GT 730).

Installing Steam Bits

Getting started with Steam on FreeBSD is relatively straightforward (unless you get stuck with something like nVidia-driver-problems, above, or specific-game-problems, below):

  • Install the helper package
    # pkg install linux-steam-utils
    

    This pulls in a whole bunch of Linux userland, and then spits out a bunch of instructions to follow, which are the following steps.

  • Create a dedicated user. I made one called steam. The instructions specifically call out a non-wheel user. I used adduser to do the work. (edit) It’s a good idea to put this user into the video and operator groups.
    # adduser
    
  • Load kernel modules. There are five modules to load; it is convenient to load them on system start, so we’ll add them to the system configuration.
    # kldload linux linux64 linprocfs linsysfs fdescfs
    # sysrc kld_list+="linux linux64 linprocfs linsysfs fdescfs"
    

    (edit) Much easier is # sysrc linux_enable=yes

  • Mount filesystems. The various Linux-compatibility filesystems need to be available. Here are four lines to add to /etc/fstab to make it work:
    linprocfs /compat/linux/proc     linprocfs rw 0 0
    linsysfs  /compat/linux/sys      linsysfs  rw 0 0
    tmpfs     /compat/linux/dev/shm  tmpfs     rw 0 0 
    fdescfs   /dev/fd                fdescfs   rw 0 0
    

    These mounts come up automatically after a system restart; run mount -a to pick up the changes right away.

  • Make shm writable. This is one I have not managed to automate on reboot; it may require some scripting in rc.local (see the sketch after the command below). The shared-memory tmpfs needs to be writable for the dedicated user; I run this after boot:
    # chown steam /compat/linux/dev/shm
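
    One way to avoid doing that by hand might be to add the same chown line to /etc/rc.local, which FreeBSD runs late in boot if the file exists. An untested sketch, assuming the dedicated user is called steam as above:

    # echo 'chown steam /compat/linux/dev/shm' >> /etc/rc.local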
    

First User Impression

After all that work as root, log in as the dedicated user, start X if that doesn’t happen automatically, open a terminal and run the script to fetch Steam preliminaries, and then run Steam itself:

$ steam-install
$ steam

Presumably you’ll need to go through the login and verification rigamarole; after that you get the same Steam overview window that you would get on a Linux system.

Note that steam as run here is a Ruby script: if any of the pre-requisites are missing, it will complain in an informative way.

Some Games

Only games that run natively on Linux will run on FreeBSD this way; not all of them will, either. Games that use Wine or other technologies are a different story, one I have not tried at all.

I tested four games out of my library, two of which are the things I play right now (on a spare Linux machine). Since my workstation and its nVidia card don’t like Steam, I ended up using my Slimbook for this testing. That severely limits what is possible: the iGPU in the i5 10th generation chip – Comet Lake-U GT2 UHD Graphics – is not meant for heavy pixel-pushing. I’ll mention some guessed-at framerates. Again, these framerates apply only to this one machine.

  • Don’t Starve Works out-of-the-box. 5-10 fps on full-screen, not playable for fun.
  • OpenTTD Works out-of-the-box. Playable in fullscreen.
  • Tooth and Tail Needs a workaround. Playable in fullscreen.
  • Unrailed Broken. On startup, message about version GLIBC 2.25 missing.

OpenTTD is an Open Source transport-tycoon game. It’s available from FreeBSD ports as well, so there’s no need to go through Steam for this, but I just wanted to double-check.

There is a compatibility list with tested games and workarounds.

Workarounds can be applied via the Launch Options in a game: right-click on a game from the games list (left-hand panel in Steam) and pick properties, or, on the game page itself, click on the gear-icon that is off to the right of the green play button.

Screenshot showing RMB menu and settings page

The screenshot shows what to do: choose properties, then on the general tab of the properties dialog, find the launch options textbox. It looks a lot like a shell-command input box. Fill in something there: the compatibility list suggests various kinds of workarounds that can be tried.
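
As a purely illustrative example of the kind of string that goes into that box (the variable name and flag here are hypothetical, take the real workaround from the compatibility list), Steam substitutes %command% with the game's own command line:

  SOME_WORKAROUND_VAR=1 %command% --windowed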

Takeaway

It works pretty well! And, like there is an explosion of gaming-on-Linux content, there’s a similar explosion of gaming-on-FreeBSD enthusiasm. If I can get my workstation to run games at all – swapping around video cards if need be – then that’s one fewer machine I need to keep around for fun.

by adridg at April 05, 2021 10:00 PM

Vermaden

UFS Boot Environments for ARM

Several days ago I introduced UFS Boot Environments, which work great on AMD64 (or 64-bit PC if you prefer). I was interested whether they would also work on less powerful devices where ZFS is not always the best idea – ARM based devices. After some testing I found out that after one simple modification the UFS Boot Environments work like a charm on ARM devices.

The Table of Contents is as follows.

  • ARM Testing
  • Setup UFS Boot Environments
  • Needed Fix to Make FreeBSD bootme Flags Work
  • Reboot into Other Boot Environment Test

There is no suitable TL;DR here – you will have to read it all or not at all this time 🙂

ARM Testing

I currently do not own a 64-bit ARM device … so I thought I would try the qemu(1) emulator and the ready-to-download-and-use ARM images provided by the FreeBSD project.

First we will install the needed packages and fetch the ARM64 (also known as aarch64) image.

host # pkg install -y qemu u-boot-qemu-arm64

host % fetch https://download.freebsd.org/ftp/releases/VM-IMAGES/13.0-RC4/aarch64/Latest/FreeBSD-13.0-RC4-arm64-aarch64.raw.xz

host % xz -d FreeBSD-13.0-RC4-arm64-aarch64.raw.xz

We will now increase the image size to add an additional boot environment partition.

host % ls -lh FreeBSD-13.0-RC4-arm64-aarch64.raw 
-rw-r--r-- 1 vermaden vermaden 5.1G 2021-04-04 12:37 FreeBSD-13.0-RC4-arm64-aarch64.raw

host % truncate -s +9G FreeBSD-13.0-RC4-arm64-aarch64.raw

host % ls -lh FreeBSD-13.0-RC4-arm64-aarch64.raw
-rw-r--r-- 1 vermaden vermaden 15G 2021-04-04 12:38 FreeBSD-13.0-RC4-arm64-aarch64.raw

Using the qemu(1) emulator we can boot using either the UEFI or the U-BOOT option. We will test both, as some ARM devices use UEFI and some (like Raspberry Pi devices) use U-BOOT mode.

host % export VMDISK=FreeBSD-13.0-RC4-arm64-aarch64.raw

// UEFI
host % qemu-system-aarch64 \
         -m 4096M \
         -cpu cortex-a57 \
         -M virt \
         -bios edk2-aarch64-code.fd \
         -serial telnet::4444,server \
         -nographic \
         -drive if=none,file=${VMDISK},format=raw,id=hd0 \
         -device virtio-blk-device,drive=hd0 \
         -device virtio-net-device,netdev=net0 \
         -netdev user,id=net0

// U-BOOT
host % qemu-system-aarch64 \
         -m 4096M \
         -cpu cortex-a57 \
         -M virt \
         -bios /usr/local/share/u-boot/u-boot-qemu-arm64/u-boot.bin \
         -serial telnet::4444,server \
         -nographic \
         -drive if=none,file=${VMDISK},format=raw,id=hd0 \
         -device virtio-blk-device,drive=hd0 \
         -device virtio-net-device,netdev=net0 \
         -netdev user,id=net0

After starting the qemu(1) process it will display the following information.

(...)
QEMU 5.0.1 monitor - type 'help' for more information
(qemu) qemu-system-aarch64: -serial telnet::4444,server: info: QEMU waiting for connection on: disconnected:telnet::::4444,server

We can now use telnet(1) to connect to the serial console of our emulated ARM64 system. We will add an additional freebsd-ufs partition for our second boot environment.

host % telnet localhost 4444
(...)

login: root

ARM # pkg install -y lsblk

ARM # lsblk
DEVICE         MAJ:MIN SIZE TYPE                              LABEL MOUNT
vtbd0            0:62   14G GPT                                   - -
  vtbd0p1        0:63   33M efi                          gpt/efiesp /boot/efi
  vtbd0p2        0:64  1.0G freebsd-swap                 gpt/swapfs -
  vtbd0p3        0:65  4.0G freebsd-ufs                  ufs/rootfs /

ARM # gpart show
=>       3  10552344  vtbd0  GPT  (14G) [CORRUPT]
         3     66584      1  efi  (33M)
     66587   2097152      2  freebsd-swap  (1.0G)
   2163739   8388608      3  freebsd-ufs  (4.0G)

ARM # gpart recover vtbd0
vtbd0 recovered

ARM # gpart add -s 4G -t freebsd-ufs vtbd0
vtbd0p4 added

ARM # gpart show
=>       3  29426709  vtbd0  GPT  (14G)
         3     66584      1  efi  (33M)
     66587   2097152      2  freebsd-swap  (1.0G)
   2163739   8388608      3  freebsd-ufs  (4.0G)
  10552347   8388608      4  freebsd-ufs  (4.0G)
  18940955  10485757         - free -  (5.0G)

We will now make some manual preparations for ufsbe.sh to work.

For example, the FreeBSD images come with GPT labels in the /etc/fstab file, which are currently not supported by UFS Boot Environments, so we will modify /etc/fstab to mount the root filesystem from raw devices and partitions instead.

ARM # mkdir -p /ufsbe/3 /ufsbe/4

ARM # cat /etc/fstab
# Custom /etc/fstab for FreeBSD VM images
/dev/gpt/rootfs  /          ufs      rw  1 1
/dev/gpt/efiesp  /boot/efi  msdosfs  rw  2 2
/dev/gpt/swapfs  none       swap     sw  0 0

ARM # vi /etc/fstab

ARM # cat /etc/fstab
# Custom /etc/fstab for FreeBSD VM images
/dev/vtbd0p3     /          ufs      rw  1 1
/dev/vtbd0p4     /ufsbe/4   ufs      rw  1 1
/dev/gpt/efiesp  /boot/efi  msdosfs  rw  2 2
/dev/gpt/swapfs  none       swap     sw  0 0

ARM # newfs /dev/vtbd0p4
/dev/vtbd0p4: 4096.0MB (8388608 sectors) block size 32768, fragment size 4096
        using 7 cylinder groups of 625.22MB, 20007 blks, 80128 inodes.
super-block backups (for fsck_ffs -b #) at:
 192, 1280640, 2561088, 3841536, 5121984, 6402432, 7682880

We now have a second boot environment ready and the /etc/fstab file modified to boot from raw devices instead of GPT labels. We will now reboot(8) to apply these changes.

ARM # reboot

Setup UFS Boot Environments

We will now fetch the ufsbe.sh command and finish the setup process.

ARM # fetch https://raw.githubusercontent.com/vermaden/ufsbe/main/ufsbe.sh

ARM # chmod +x ./ufsbe.sh

ARM # ./ufsbe.sh

NOPE: did not found boot environment setup with 'ufsbe' label

INFO: setup each boot environment partition with appropriate label

HELP: list all 'freebsd-ufs' partitions type:

  # gpart show -p | grep freebsd-ufs
      2098216   33554432  ada0p3  freebsd-ufs  [bootme]  (16G)
     35652648   33554432  ada0p4  freebsd-ufs  (16G)
     69207080   33554432  ada0p5  freebsd-ufs  (16G)

HELP: to setup partitions 3/4/5 as boot environments type:

  # gpart modify -i 3 -l ufsbe/3 ada0
  # gpart modify -i 4 -l ufsbe/4 ada0
  # gpart modify -i 5 -l ufsbe/5 ada0

ARM # gpart show
=>       3  29426709  vtbd0  GPT  (14G)
         3     66584      1  efi  (33M)
     66587   2097152      2  freebsd-swap  (1.0G)
   2163739   8388608      3  freebsd-ufs  (4.0G)
  10552347   8388608      4  freebsd-ufs  (4.0G)
  18940955  10485757         - free -  (5.0G)

ARM # gpart modify -i 3 -l ufsbe/3 vtbd0
vtbd0p3 modified

ARM # gpart modify -i 4 -l ufsbe/4 vtbd0
vtbd0p4 modified

ARM # ./ufsbe.sh 
INFO: flag 'bootme' successfully set on / filesystem
usage:
  ufsbe.sh (l)ist
  ufsbe.sh (a)ctivate
  ufsbe.sh (s)ync

The UFS Boot Environments are now properly deployed on this ARM64 test system.

Needed Fix to Make FreeBSD bootme Flags Work

On my first try I was not able to use UFS Boot Environments, as the bootme flag was ignored.

I then submitted a FreeBSD bug – 254764 – GPT ‘bootme’ flag is not respected on AARCH64 – to make sure I was doing everything right on my side. As it turns out, the bootme flag is a FreeBSD-specific extension and nobody else uses it. The needed fix is to copy /boot/gptboot.efi in place of the bootaa64.efi file.

Let’s now apply that fix.

ARM # cp /boot/gptboot.efi /boot/efi/EFI/BOOT/bootaa64.efi

Reboot into Other Boot Environment Test

We will now synchronize boot environments 3 and 4 and then reboot into boot environment 4.

ARM # ./ufsbe.sh list
PROVIDER LABEL        ACTIVE
vtbd0p3  ufsbe/3      NR  
vtbd0p4  ufsbe/4      -  

ARM # ./ufsbe.sh sync 3 4
INFO: syncing '3' (source) => '4' (target) boot environments ...
INFO: boot environments '3' (source) => '4' (target) synced

ARM # ./ufsbe.sh activate 4
INFO: boot environment '4' now activated

ARM # reboot

After the reboot the currently active boot environment is 4. This means that UFS Boot Environments work properly on ARM devices.

ARM # ./ufsbe.sh list
PROVIDER LABEL        ACTIVE
vtbd0p3  ufsbe/3      -   
vtbd0p4  ufsbe/4      NR  

ARM # df -h
Filesystem         Size    Used   Avail Capacity  Mounted on
/dev/vtbd0p4       3.9G    2.6G    935M    74%    /
devfs              1.0K    1.0K      0B   100%    /dev
/dev/vtbd0p3       3.9G    2.6G    932M    74%    /ufsbe/3
/dev/gpt/efiesp     32M    1.3M     31M     4%    /boot/efi

I have tested both U-BOOT and UEFI boot modes and both allow the use of UFS Boot Environments.

EOF

by vermaden at April 05, 2021 08:40 PM

April 04, 2021

Bobulate

Some Calamares Packaging

Calamares is a distro-, desktop- and toolkit-independent installer for Linux systems. It is intended to be the thing that gets your CD installed onto the hard drive of the target system. The Dutch term voortschrijdend inzicht is applicable here, too, as CD’s have been replaced by ISO images or USB sticks and hard drives are now (virtualised) SSDs instead. Today, though, I’m going to look at the packaging of Calamares – what distro’s do to get a Calamares executable that can be put on that CD.

In this post I’m going to pick apart a PKGBUILD script – that’s for Arch and derivatives. It is not intended to pick on that particular script: the script is there to get the effect that the distro wants, and if it solves their problem, it is Good Enough™.

Packaging software is an activity that is as old as Linux distro’s. Older, even, but the idea that it is the distro that picks some sources, selects options and compile flags and bundles things up really took off with the “Cambrian explosion” of packaging formats. On the FreeBSD front, there is the ports tree which is 40000 Makefiles, and Arch has PKGBUILD, among others in the Arch Linux user repository. Other distro’s have similar collections of build instructions.

The thinking in packaging is swinging towards developer-led packaging in AppImage or SnapCraft or FlatPak. I’m still not sure what I think of that in general, but for Calamares it really doesn’t make sense: Calamares is not an end-user application, it’s intended to be used once in a pretty specific situation and then discarded.

So Calamares is pretty traditional, shipping a source tarball, and you’re expected to run cmake; make; make install to do the things. As time goes by, voortschrijdend inzicht applies (the realisation that things could be better) and conveniences are added to the CMake files, and knobs added or tuned.

I realised that I had not advertised that kind of improvement in the release notes for Calamares when I looked at a PKGBUILD file for it.

Getting Version Information

This particular PKGBUILD is intended to consume Calamares git-nightly, so it starts from a git checkout. For the safe and correct labeling of the package, it wants a full version, not just the major.minor.patch version.

pkgver() {
	cd ${srcdir}/calamares
	_ver="$(cat CMakeLists.txt | grep -m3 -e "  VERSION" | grep -o "[[:digit:]]*" | xargs | sed s'/ /./g')"
	_git=".r$(git rev-list --count HEAD).$(git rev-parse --short HEAD)"
	printf '%s%s' "${_ver}" "${_git}"
	sed -i -e "s|\${CALAMARES_VERSION_MAJOR}.\${CALAMARES_VERSION_MINOR}.\${CALAMARES_VERSION_PATCH}|${_ver}${_git}|g" CMakeLists.txt
	sed -i -e "s|CALAMARES_VERSION_RC 1|CALAMARES_VERSION_RC 0|g" CMakeLists.txt
}

This extracts the M.m.p version number from CMakeLists.txt (based on the version being listed in the project() call in a specific way), appends a git hash to it, then goes back and edits that version into the CMakeLists.txt. While there, it sets RC to 0 (so it behaves like a release).

Calamares itself has had this information for a long time: there is a CalamaresVersion.h and a CalamaresVersionX.h with the short-form and the long-form (including git revision and date) version information. There was even a special-case (make) target that does nothing but print out the version!

So here’s a clear case of communications failure: I didn’t advertise the availability of the version information, and the distro did not clearly ask for it upstream. So we ended up building similar solutions independently, rather than doing it once-and-for-all.

Changes Made

As I mentioned in my post about CMake script mode, I can use CMake itself to print versioning information for the distro. Once I had done that, I realised that in doing so, I broke their build!

The PKGBUILD depends on a particular structure of the CMakeLists.txt file. I removed that structure, so it’s now broken. Even though I built a convenient means to get the same information, and so “fixed” the underlying problem (getting versioning information), it’s still no good for this particular distro.

The change in question has been deferred to Calamares 3.3, which is full of breaking changes (dependencies, requirements, configuration changes) and is going to need a big “heads up” to distro’s anyway.

Things that I have done in the current Calamares release series, though, include:

  • dropping useless information from the version (“rc1” to show a version is a development version; that’s also obvious from the date and git hash in the version),
  • providing a slightly more compact version string,
  • making calamares --version print the long version string,
  • logging the long version string on startup.

It’s also clear that I need to start advertising “this may have an effect on packaging” in release notes. So after Easter there’s still things to do.

by adridg at April 04, 2021 10:00 PM

April 01, 2021

Vermaden

UFS Boot Environments

Yes, you read that correctly. The fabulous ZFS Boot Environments – more about them here – https://is.gd/BECTL – if you are not familiar with this concept – are now also possible on UFS filesystems on FreeBSD. Of course in a slightly different form and without using snapshots and clones, but the idea and solution remain. You can now have bootable backups of your system before major changes and/or upgrades. This solution does not use UFS snapshots. All bootable UFS variants are supported, with and without Soft Updates or Soft Updates Journaling. The idea behind UFS Boot Environments lies in several additional root (/) partitions that will be used as alternate boot environments.

The concept is similar to the Solaris Live Upgrade mechanism which used the lucreate/luupgrade/lustatus commands, and also to AIX Alternate Disk Cloning and Install with the alt_disk_copy/alt_disk_install commands.

In this article I will show you how to set up a new FreeBSD system with 3 such partitions. In my honest opinion it's more than enough for most purposes. On my desktop/workstation I have more than 1000 packages installed. With the FreeBSD Base System it takes about 11 GB of space with ZFS compression and 15 GB without it. Thus I propose 16 GB partitions. Your needs may of course be different. You may as well create 4 GB or 64 GB partitions.

The UFS Boot Environments would not exist without the inspiration from the FreeBSD Upgrade Procedure Using GPT blog post by Mariusz Zaborski (also known as oshogbo), who describes the concept of bootme flags for GPT partitions. That is the heart of this solution. When you activate a boot environment, the bootme flag is removed from all existing boot environments and set on the newly desired one. The ufsbe(8) tool has currently been tested on FreeBSD 12.x and 13.x.
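
Manually, that activation step boils down to moving the flag around with gpart(8) itself. A minimal sketch, assuming boot environments on partitions 3 and 4 of the ada0 disk:

# gpart unset -a bootme -i 3 ada0
# gpart set -a bootme -i 4 ada0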

FreeBSD Install for UFS Boot Environments

Generally only GPT partitioning is needed to use UFS Boot Environments. Below I will show an example install process with 3 root partitions of 16 GB each.

In the FreeBSD Installer select Install.

Then select the Auto (UFS) option.

Then use the Entire Disk option.

Then select GPT partition table.

The FreeBSD Installer will propose the following solution.

Change it into 3 partitions of 16 GB each to make it look like the one below and hit Finish.

Then Commit your choice.

… and then the install process will continue as usual.

Besides these options you may select whatever you like in the rest of the install process.

After the system reboots, its gpart(8) output will look like the one below.

root@fbsd13:~ # gpart show
=>       40  134217648  ada0  GPT  (64G)
         40       1024     1  freebsd-boot  (512K)
       1064    2097152     2  freebsd-swap  (1.0G)
    2098216   33554432     3  freebsd-ufs  (16G)
   35652648   33554432     4  freebsd-ufs  (16G)
   69207080   33554432     5  freebsd-ufs  (16G)
  102761512   31456176        - free -  (15G)

Now fetch(1) the ufsbe.sh script from its GitHub page.

# fetch https://raw.githubusercontent.com/vermaden/ufsbe/main/ufsbe.sh
# chmod +x ./ufsbe.sh
# ./ufsbe.sh

NOPE: did not found boot environment setup with 'ufsbe' label

INFO: setup each boot environment partition with appropriate label

HELP: list all 'freebsd-ufs' partitions type:

  # gpart show -p | grep freebsd-ufs
      2098216   33554432  ada0p3  freebsd-ufs  [bootme]  (16G)
     35652648   33554432  ada0p4  freebsd-ufs  (16G)
     69207080   33554432  ada0p5  freebsd-ufs  (16G)

HELP: to setup partitions 3/4/5 as boot environments type:

  # gpart modify -i 3 -l ufsbe/3 ada0
  # gpart modify -i 4 -l ufsbe/4 ada0
  # gpart modify -i 5 -l ufsbe/5 ada0

It will welcome you with information about needed setup steps.

We will now perform these steps, marking all boot environment partitions with the appropriate ufsbe labels.

# gpart modify -i 3 -l ufsbe/3 ada0
ada0p3 modified
# gpart modify -i 4 -l ufsbe/4 ada0
ada0p4 modified
# gpart modify -i 5 -l ufsbe/5 ada0
ada0p5 modified

Now ufsbe.sh will set the bootme flag on the currently used root (/) partition.

# ./ufsbe.sh
INFO: flag 'bootme' successfully set on / filesystem
usage:
  ufsbe.sh list
  ufsbe.sh activate
  ufsbe.sh sync  

Setup is complete.

All three root partitions have the ufsbe label. To keep it simple, the /dev/ada0p3 device gets the ufsbe/3 label and the /dev/ada0p4 device gets ufsbe/4 … you see the pattern.

# gpart show -p -l
=>       40  134217648    ada0  GPT  (64G)
         40       1024  ada0p1  (null)  (512K)
       1064    2097152  ada0p2  swap  (1.0G)
    2098216   33554432  ada0p3  ufsbe/3  [bootme]  (16G)
   35652648   33554432  ada0p4  ufsbe/4  (16G)
   69207080   33554432  ada0p5  ufsbe/5  (16G)
  102761512   31456176          - free -  (15G)

You can now use UFS Boot Environments on this system.

Using UFS Boot Environments

Let’s list our boot environments with the list command. The short ‘l’ option also works.

# ./ufsbe.sh list
PROVIDER LABEL        ACTIVE
ada0p3   ufsbe/3      NR  
ada0p4   ufsbe/4      -   
ada0p5   ufsbe/5      -  

Its output is similar to that of my ZFS Boot Environments tool beadm(8). The N flag shows the boot environment we are using NOW. The R flag shows which one we will use after the reboot(8).

Currently only boot environment 3 is populated (by the FreeBSD Installer, that is). Boot environments 4 and 5 are empty filesystems.

You can either extract your own FreeBSD version there from base.txz and kernel.txz or use the sync option of ufsbe.sh, which will use rsync(1) for the process. Below is an example of syncing boot environment 3 (the one we installed) to the currently empty boot environment 4.

# ./ufsbe.sh sync 3 4
NOPE: rsync(1) is not available in ${PATH}
INFO: install 'net/rsync' package or port

# pkg install net/rsync

# ./ufsbe.sh sync 3 4
INFO: syncing '3' (source) => '4' (target) boot environments ...
INFO: boot environments '3' (source) => '4' (target) synced

You can now see that boot environments 3 and 4 have the same size.

# df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/ada0p3     15G    1.3G     13G     9%    /
devfs          1.0K    1.0K      0B   100%    /dev
/dev/ada0p4     15G    1.2G     13G     9%    /ufsbe/4
/dev/ada0p5     15G     32M     14G     0%    /ufsbe/5

If we would like to activate the empty boot environment 5, ufsbe.sh will not let us do that, because that would make our system unbootable. Of course it is quite a fast/naive check, but it at least makes sure some files exist on the soon-to-be-active boot environment. Currently these files are checked, but this list may grow in the future (a rough sketch of such a check follows the list):

  • /boot/kernel/kernel
  • /boot/loader.conf
  • /etc/rc.conf
  • /rescue/ls
  • /bin/ls
  • /sbin/fsck
  • /usr/bin/su
  • /usr/sbin/chroot
  • /lib/libc.so.*
  • /usr/lib/libpam.so.*
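
A minimal sketch of such a check in plain sh(1) follows. This is just the idea, not the actual ufsbe.sh code, and it assumes the target boot environment is mounted under /ufsbe/5:

TARGET=/ufsbe/5
for FILE in /boot/kernel/kernel /boot/loader.conf /etc/rc.conf /rescue/ls /bin/ls /sbin/fsck
do
  if [ ! -e "${TARGET}${FILE}" ]
  then
    echo "NOPE: critical file '${TARGET}${FILE}' is missing"
    exit 1
  fi
done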

Below is this ‘protection’ in action.

# ./ufsbe.sh activate 5
NOPE: boot environment '5' is not complete
INFO: critical file '/ufsbe/5/boot/kernel/kernel' is missing
INFO: use 'sync' option or copy file manually

The boot environment 4 activation process works as desired, as we populated it with files from boot environment 3 first.

# ./ufsbe.sh activate 4
INFO: boot environment '4' now activated

Same as with beadm(8), ufsbe.sh also checks if a boot environment is already activated.

# ./ufsbe.sh activate 4
INFO: boot environment '4' is already active

The list of our boot environments now looks like this.

# ./ufsbe.sh list
PROVIDER LABEL        ACTIVE
ada0p3   ufsbe/3      N   
ada0p4   ufsbe/4      R   
ada0p5   ufsbe/5      -   

… and this is how the output of gpart(8) looks.

# gpart show -p -l
=>       40  134217648    ada0  GPT  (64G)
         40       1024  ada0p1  (null)  (512K)
       1064    2097152  ada0p2  swap  (1.0G)
    2098216   33554432  ada0p3  ufsbe/3  (16G)
   35652648   33554432  ada0p4  ufsbe/4  [bootme]  (16G)
   69207080   33554432  ada0p5  ufsbe/5  (16G)
  102761512   31456176          - free -  (15G)

We will now reboot into the activated boot environment 4.

# shutdown -r now

After the reboot(8) we see that we are now booted from boot environment 4.

# ./ufsbe.sh list
PROVIDER LABEL        ACTIVE
ada0p3   ufsbe/3      -   
ada0p4   ufsbe/4      NR  
ada0p5   ufsbe/5      -   

Closing Notes

Keep in mind that this is only the first, 0.1, version of ufsbe.sh. Do not use it on production or important systems, and make sure you have restorable backups. Like with beadm(8) in the past, I plan to improve it with more useful options and also add it to the Ports tree in the future.

Feel free to share your thoughts about this tool.

I must wait till midnight so it shows as posted on the 2nd of April, because if I posted it on the 1st of April it would be taken as an April Fools’ joke, which it definitely is not.

Enjoy.

Updating or Upgrading

You may use the Upgrade FreeBSD with ZFS Boot Environments method with these UFS Boot Environments as well, but now you will chroot(8) into /ufsbe/4, for example (a rough outline below).
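
This is a sketch only, assuming boot environment 4 is mounted under /ufsbe/4 as in the df(1) output above; adapt it to your upgrade method of choice:

# mount /ufsbe/4
# chroot /ufsbe/4
# (run freebsd-update(8) / pkg(8) / installworld inside the chroot)
# exit
# ./ufsbe.sh activate 4
# shutdown -r now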

EOF

by vermaden at April 01, 2021 10:46 PM

March 31, 2021

HardenedBSD

HardenedBSD March 2021 Status Report

This month, I worked on finding and fixing the regression that caused kernel panics on our package builders. I think I found the issue: I made it so that the HARDENEDBSD amd64 kernel just included GENERIC so that we follow FreeBSD's toggling of features. Doing so added QUEUE_MACRO_DEBUG_TRASH to our kernel config. That option is the likely culprit. If the next package build (with the option removed) completes, I will commit the change that removes QUEUE_MACRO_DEBUG_TRASH from the HARDENEDBSD amd64 kernel.

I still have one new server to set up. I plan to use it for our 12-STABLE builds. I enabled the 14-CURRENT/arm64 nightly builds and we've now completed two production package builds.

I'm giving a virtual presentation on 07 Apr 2021 titled "HardenedBSD 2021 State of the Hardened Union." It details the work we've been doing since the last HardenedBSD State of the Union.

As part of that presentation, I'd like to highlight areas in which HardenedBSD is used. If you or your employer uses HardenedBSD and would like me to add a slide about it, please reach out to me.

In April, I plan to focus on the ports tree. I'm going to audit all the ports that fail to build and determine if I can easily get them to build. A large number of ports ignore the -fPIC and -fPIE compiler flags we set and subsequently fail to build.

Jason Donenfeld of the Wireguard project is looking for a maintainer/developer for the Wireguard FreeBSD kernel module. If you are familiar with the networking kernel code and would like to help, please reach out to me. I'll get you in touch with Jason. I'm hoping that the HardenedBSD community can fill a gap where the FreeBSD community failed: developing a robust in-kernel Wireguard implementation properly blessed by the Wireguard project. I would be happy to dedicate some HardenedBSD infrastructure resources to help support this effort. Those resources include, but are not limited to: a repo on our self-hosted git server and a VM for nightly builds.

by Shawn Webb at March 31, 2021 11:44 PM

March 22, 2021

Tubsta

freebsd-update 13.0 caveats

By: Jason Tubnor @tubsta

We have been performing some extensive and edge-case updates from FreeBSD versions earlier than 13.0. In most cases there have been no issues, and most users will have a smooth upgrade process when 13.0-RELEASE finally hits, but there are some issues we ran into that need a bit of clarification and some advice on how to work around them.

There have been significant changes in how ifconfig(8) interacts with vlan(4). While most users have hosts on PVID or untagged switch ports, those that have fully tagged hosts will have issues during the upgrade, with the host not being available on the network after the first reboot. VLAN interfaces will not be available until the new version of ifconfig(8) is put in place during the second ‘freebsd-update install’ run.

To get around this issue, users will either need IPMI/LOM access to their hosts to execute the second:

freebsd-update install && shutdown -r now

at which point the host will reboot and again have access to the network, because the newer ifconfig(8) is able to configure VLANs correctly. Another method is to apply the above as a cron(8) entry and have it execute a couple of minutes after the host has rebooted (see the sketch below). For this, you will need to know how long it takes for your host to reboot and allow a couple of extra minutes for the system to settle down post boot.
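
One way to do that is an @reboot entry in root's crontab(5). This is an untested sketch; pick a delay comfortably longer than your boot time and remove the entry again once the upgrade is done:

@reboot sleep 300 && /usr/sbin/freebsd-update install && /sbin/shutdown -r now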

The other issue is for those that have started using the zpool checkpoint feature. If you are using legacy boot (BIOS boot), you will have issues with the system on boot after you trigger the first reboot if you have an existing 12.2 checkpoint on the system:

zio boot issue

Please note: During testing, it was found that systems that boot via UEFI do not suffer this issue, and users on UEFI can use a checkpoint prior to their freebsd-update.

The following error (to assist search engines):

zio_read error: 5
zio_read error: 5
zio_read error: 5
ZFS: i/o error - all block copies unavailable
ZFS: can not read checkpoint data.

indicates that there are some checkpoint compatibility issues in gptzfsboot for legacy-booted systems. While this could be looked into by developers, I don’t think many will hit this issue, and it really should just be documented in the release notes. If this happens to you, simply boot the installation media, drop to the shell prompt and issue:

zpool import poolname
zpool checkpoint --discard poolname
reboot

When the system reboots, the boot loader won’t try to read in the checkpoint state and will continue booting your half-upgraded system cleanly. Further testing has shown that there are no ongoing issues with checkpoints once you have successfully updated your system.

by Tubsta at March 22, 2021 03:22 AM

March 15, 2021

Tubsta

FreeBSD 13.0 – Full Desktop Experience

By: Jason Tubnor @tubsta

With the release of FreeBSD 13.0 on the horizon, I wanted to see how it shapes up on my Lenovo T450 laptop.  Previous major releases on this laptop, using it as a workstation, felt very rough around the edges but with 13, it feels like the developers got it right.

I like to keep things simple when it comes to a desktop operating system so the description below is how I went from a fresh install of FreeBSD 13.0RC1 to a working environment that is based on using the XFCE4 desktop experience.

The FreeBSD install process is simple and well documented in other official locations, so I am not going to repeat that here.  However, some of the configuration items that I did select were to use ZFS on root, encrypted swap and to disable all services (this is a workstation, not a server).

Once the machine had been rebooted, we need to set it up so that suspend/resume works correctly (and tests as such) and enable power management.  The main issue that people have getting the resume part of the suspend/resume to work is not having the drm or xf86 drivers loaded that are applicable to the onboard graphics.

For the T450 here, we have a standard Intel graphics chipset.  Install the following binary packages for the i915 drivers, enable them and the power management services, then reboot your machine for testing:

pkg bootstrap -f
pkg install -y drm-fbsd-kmod xf86-video-intel
sysrc kld_list+="i915kms"
sysrc powerd_enable=YES
shutdown -r now

Once the machine has rebooted, you can test to see if your laptop can go into a suspend state and then resume without issue:

acpiconf -s 3

This will send the laptop into the S3 suspend state. Wait 30 seconds and then briefly press the power button. If all is working correctly, your laptop should come back to life including the screen.  This has been ‘hit and miss’ on the T450 in previous versions but seems to be working ok on 13.0.

Just a note, if you do experience some issues here, make sure your bios/firmware has been updated to the latest release.

If the above worked ok, then you can set the sysctl parameter to suspend on lid closure:

sysctl hw.acpi.lid_switch_state=S3

Also set it in the sysctl.conf file so it is set correctly each boot:

echo 'hw.acpi.lid_switch_state=S3' >> /etc/sysctl.conf

The final thing to do is load up xorg, XFCE4 and a few of our favourite apps to get us going.

pkg install -y xorg xfce xfce-goodies xscreensaver \
slim-freebsd-black-theme openntpd amigafonts mc \
oksh otter-browser cool-retro-term bluefish

Once all the packages have been installed, enable dbus, slim and openntpd then start the services (except slim, reboot when you are ready to start XFCE4):

sysrc dbus_enable=YES && service dbus start
sysrc openntpd_enable=YES && service openntpd start
sysrc slim_enable=YES

At some point, I’ll change to the OpenBSD ksh shell (oksh in FreeBSD packages) so I’ll add an entry into the ~/.profile file to read in ~/.kshrc

ENV=$HOME/.kshrc; export ENV

And the skeleton of my ~/.kshrc file will look something like the following:

HISTFILE="$HOME/.ksh_history"
HISTSIZE=5000
export VISUAL="emacs"
export EDITOR="vi"
set -o emacs

Log out and log back in to ingest the above environment variables (or wait for the reboot below).

The final part is to get a vanilla XFCE4 desktop setup and enable/disable some of the default settings so the desktop works efficiently with the suspend/resume function and change the screensaver/lock screen.

Setup the slim display manager.  Edit the /usr/local/etc/slim.conf and change the default theme to slim-freebsd-black-theme:

current_theme slim-freebsd-black-theme

Set XFCE4 to auto-start after login, create the ~/.xinitrc file and then insert the lines:

ENV=$HOME/.kshrc; export ENV
exec startxfce4

Restart the laptop for the DM to take effect (this will also allow reboot and shutdown directly from within XFCE4 for users):

shutdown -r now

Login and once at the XFCE4 desktop, go to Applications -> Settings -> Settings Manager

In settings, scroll down to System and select ‘Session and Startup’

Select ‘Application Autostart’, de-select XFCE Screensaver, select both Screensaver and AT-SPI D-Bus Bus.

Once the above is done, log out of XFCE4 and log back in. You can then configure the xscreensaver program, and you can suspend and resume your laptop by closing and opening the lid at any point while using it.

SLiM desktop manager
XFCE4 desktop with web browser
XFCE4 Desktop

by Tubsta at March 15, 2021 10:21 AM

March 04, 2021

Colin Percival

107 Lightbulbs

I bought a house last summer, and after moving in on October 1st, one of my first priorities was to replace all of the old (mostly incandescent) light bulbs with efficient LED bulbs. This turned into a five month saga.

March 04, 2021 03:20 AM

January 31, 2021

Warner Losh

EPSON QX-10 20MB Hard disk

 EPSON QX-10 20MB Hard Disk

I've been looking for some DEC Rainbow 3rd party hard drives of late. QCS (Quality Computer Services) made an external hard disk for the DEC Rainbow. There are advertisements in Digital Review and other trade magazines of the time. It uses a SASI interface, and likely had a DEC Rainbow specific add-in card that they rewarmed from other designs...

One recently came up for sale on E-Bay. I thought I'd buy it to check it out. There was no interface card with it, alas. But it was a box with a WD1006 SASI to MFM controller in it that could handle two different drives. The drives were LUN0 and LUN1.

SASI, for those that don't know, pre-dates SCSI-1. It's kinda sorta SCSI-1 compatible, if you turn off parity, don't allow the drive to signal attention and restrict yourself to a subset of commands. It also doesn't have INQUIRY, so you kinda have to know the size of the drive beforehand. Most SASI controller drivers of the day wrote a label to the drive with this information, since it was always possible to read LBA0 without knowing anything else about the drive. Some controllers had ways to at least return a size, though that varied a lot...

Since SASI is kinda hard to interface to modern SCSI controllers, I used a MFM reader board I got from David Gesswein over at https://www.pdp8.net/mfm/mfm.shtml to read the drive. I had hoped to find that it was from an old Rainbow and I'd complete my collection of drivers for third party drives...

Much to my surprise, I was able to read it without any errors until it hit the manufacturing tracks (480-489). I pulled a full image, then downloaded it to my FreeBSD box for analysis.

hexdump -C told me it was a CP/M disk (I recognized the directory format). It was clear right away it wasn't a DEC Rainbow disk, however.

The first thing I noticed was the "Bi-Tech Multi Drive Support V4.02" string, which indicated who made the driver for it. I also noticed strings like the following:
PT.COM for EPSON QX-10 PeachText 5000 date changed - 02/03/84

and similar references to the QX-10 or EPSON CP/M.

So, this was from an Epson QX-10 CP/M system. It looks to have belonged to a soft-water service company from South Bend, Indiana. All their books and correspondence from the mid 1980s were on it, along with some interesting disk support software. There are even some bits of Z80 assembler, but they are too disjointed to know what they were for.

I've not been able to get cpmtools to read the disk in a structured way, however, so it's hard to share just the interesting bits. Still working on it.

If you have one of these machines, or are interested in preserving software from it, please let me and we may be able to work something out.


 

 

by Warner Losh (noreply@blogger.com) at January 31, 2021 07:59 AM

January 16, 2021

FreeBSD

October-December 2020 Status Report

The October to December 2020 Status Report is now available with 42 entries.

January 16, 2021 08:00 AM

December 28, 2020

Adrian Chadd

Repairing and bootstrapping an IBM PC/AT 5170, Part 3

So! In Parts 1 and 2 I covered getting this old thing cleaned up, getting it booting, hacking together a boot floppy disc and getting a working file transfer onto a work floppy disc.

Today I'm going to cover what it took to get it going off of a "hard disk", which in 2020 can look quite a bit different than 1990.

First up - what's the hard disk? Well, in the 90s we could still get plenty of MFM and early IDE hard disks to throw into systems like this. In 2020, well, we can get Very Large IDE disks from NOS (like multi hundred gigabyte parallel ATA interface devices), but BIOSes tend to not like them. The MFM disks are .. well, kinda dead. It's understandable - they didn't exactly build them for a 40 year shelf life.

The IBM PC, like most computers at the time, allowed peripherals to include software support in ROM for the hardware you're trying to use. For example, my PC/AT BIOS doesn't know anything about IDE hardware - it only knows about ye olde ST-412/ST-506 Winchester/MFM drive controllers. But contemporary IDE hardware would include the driver code for the BIOS in an on-board ROM, which the BIOS would enumerate and use. Other drive interconnects such as SCSI did the same thing.

By the time the 80386's were out, basic dumb IDE was pretty well supported in BIOSes as, well, IDE is really code for "let's expose some of the 16 bit ISA bus on a 40 pin ribbon cable to the drives". But, more about that later.

Luckily some electronics minded folk have gone and implemented alternatives that we can use. Notably:

  • There's now an open-source "Universal IDE BIOS" available for computers that don't have IDE support in their BIOS - notably the PC/XT and PC/AT, and
  • There are plenty of projects out there which break out the IDE bus on an XT or AT ISA bus - I'm using XT-IDE.
Now, I bought a little XT-IDE + compact flash card board off of ebay. They're cheap, it comes with the universal IDE bios on a flash device, and ...

... well, I plugged it in and it didn't work. So, I wondered if I broke it. I bought a second one, as I don't have other ISA bus computers yet, and ...

It didn't work. Ok, so I knew there was something up with my system, not these cards. I did the 90s thing of "remove all IO cards until it works" in case there was an IO port conflict and ...

.. wham! The ethernet card. Both wanted 0x300. I'd have to reflash the Universal IDE BIOS to get it to look at any other address, so off I went to get the Intel Etherexpress 8/16 card configuration utility.

Here's an inside shot of the PC/AT with the XT-IDE installed, and a big gaping hole where the Intel EtherExpress 8/16 NIC should be.

No wait. What I SHOULD do first is get the XT-IDE CF card booting and running.

Ok, so - first things first. I had to configure the BIOS drive as NONE, because the BIOS isn't servicing the drive - the IDE BIOS is. Unfortunately, the IDE BIOS is coming in AFTER the system BIOS disks, so I currently can't run MFM + IDE whilst booting from IDE. I'm sure I can figure out how at some point, but that point is not today.

Success! It boots!

To DOS 6.22!

And only the boot sector, and COMMAND.COM! Nooooo!

Ok so - I don't have a working 3.5" drive installed, I don't have DOS 6.22 media on 1.2MB, but I can copy my transfer program (DSZ) - and Alley Cat - onto the CF card. But - now I need the DOS 6.22 install media.

On the plus side - it's 2020 and this install media is everywhere. On the minus side - it's disk images that I can't easily use. On the double minus side - the common DOS raw disk read/write tools - RAWREAD/RAWRITE - don't know about 5.25" drives! Ugh!

However! Here's where a bit of hilarious old knowledge is helpful - although the normal DOS installers want to be run from floppy, there's a thing called the "DOS 6.22 Upgrade" - and this CAN be run from the hard disk. However! You need a blank floppy for it to write the "uninstallation" data to, so keep one of those handy.

I extracted the files from the disk images using MTOOLS - "MCOPY -i DISK.IMG ::*.* ." to get everything out - then used PKZIP and DSZ to get it over to the CF card, and then ran the upgrader.
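
For reference, a small sh loop along these lines (run on the modern side of the link) pulls each image apart into its own directory; the image file names here are made up:

for i in DISK1.IMG DISK2.IMG DISK3.IMG; do
    mkdir -p "${i%.IMG}"
    mcopy -i "$i" ::*.* "${i%.IMG}"/
done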


Hello DOS 6.22 Upgrade Setup!


Ah yes! Here's the uninstall disc step! Which indeed I had on hand for this very moment!


I wonder if I should fill out the registration card for this install and send it to Microsoft.

Ok, so that's done and now I have a working full DOS 6.22 installation. Now I can do all the fun things like making a DOS boot disk and recovery things. (For reference - you do that using FORMAT /S A: to format a SYSTEM disk that you can boot from; then you add things to it using COPY.)

Finally, I made a boot disk with the Intel EtherExpress 8/16 config program on it, and reconfigured my NIC somewhere other than 0x300. Now, I had to open up the PC/AT, remove the XT-IDE and install the EtherExpress NIC to do this - so yes, I had to boot from floppy disc.

Once that was done, I added a bunch of basic things like Turbo C 2.0, Turbo Assembler and mTCP. Now, mTCP is a package that really needed to exist in the 90s. However, this and the RAM upgrade (which I don't think I've talked about yet!) will come in the next installment of "Adrian's remembering old knowledge from his teenage years!".

by Adrian (noreply@blogger.com) at December 28, 2020 11:45 PM

December 17, 2020

Adrian Chadd

Repairing and bootstrapping an IBM 5170 PC/AT, part 2

Ok, so now it runs. But, what did it take to get here?

First up - I'm chasing down a replacement fusible PROM and I'll likely have to build a programmer for it. The programmer will need to program a bit at a time, which is very different to what the EPROM programmers available today support. It works for now, but I don't like it.

I've uploaded a dump of the PROM here - https://erikarn.github.io/pcat/notes.html .

Here's how the repair looks so far:



Next - getting files onto the device. Now, remember the hard disk is unstable, but even given that, it's only DOS 5.0, which didn't really ship with any useful file transfer stuff. Everyone expected you'd have floppies available. But, no, I don't have DOS available on floppy media! And, amusingly, I don't have a second 1.2MB drive installed anywhere to transfer files.

I have some USB 3.5" drives that work, and I have a 3.5" drive and Gotek drive to install in the PC/AT. However, until yesterday I didn't have a suitable floppy cable - the 3.5" drive and Gotek USB floppy thingy both use IDC pin connectors, and this PC/AT uses 34 pin edge connectors. So, whatever I had to do, I had to do with what I had.

There are a few options available:

  • You can write files in DOS COMMAND.COM shell using COPY CON <file> - it either has to be all ascii, or you use ALT-<3 numbers> to write ALT CODES. For MS-DOS, this would just input that value into the keyboard buffer. For more information, Wikipedia has a nice write-up here: https://en.wikipedia.org/wiki/Alt_code .
  • You can use an ASCII only assembly as above: a popular one was TCOM.COM, which I kept here: https://erikarn.github.io/pcat/tcomtxt.asm
  • If you have MODE.COM, you could try setting up the serial port (COM1, COM2, etc) to a useful baud rate, turn on flow control, etc - and then COPY COM1 <file>. I didn't try this because I couldn't figure out how to enable hardware flow control, but now that I have it all (mostly) working I may give it a go.
  • If you have QBASIC, you can write some QBASIC!
I tried TCOM.COM, both at 300 and 2400 baud. Neither was reliable, and there's a reason for it - writing to the floppy is too slow! Far, far too slow! And, it wasn't enforcing hardware flow control, which was very problematic for reliable transfers.

So, I wrote some QBASIC. It's pretty easy to open a serial port and read/write to it, but it's not AS easy to have it work for binary file transfer. There are a few fun issues:

  • Remember, DOS (and Windows too, yay!) has a difference between files open for text reading/writing and files open for binary reading/writing.
  • QBASIC has sequential file access or random file access. For sequential, you use INPUT/PRINT, for random you use GET and PUT.
  • There's no byte type - you define it as a STRING type of a certain size.
  • This is an 8MHz 80286, and .. well, let's just say QBASIC isn't the fastest thing on the planet here.
I could do some basic IO fine, but I couldn't actually transfer and write out the file contents quickly and reliably. Even going from 1200 to 4800 and 9600 baud didn't increase the transfer rate! So even given an inner loop of reading/writing a single byte at a time with nothing else, it still can't keep up.

The other amusingly annoying thing is what to use on the remote side to send binary files. Now, you can use minicom and such on FreeBSD/Linux, but it doesn't have a "raw" transfer type - it has xmodem, ymodem, zmodem and ascii transfers. I wanted to transfer a ~ 50KB binary to let me do ZMODEM transfers, and .. well, this presents a bootstrapping problem.

After a LOT of trial and error, I ended up with the following:

  • I used tip on FreeBSD to talk to the serial port
  • I had to put "hf=true" into .tiprc to force hardware handshaking; it didn't seem to work when I set it after I started tip (~s to set a variable)
  • On the QBASIC side I had to open it up with hardware flow control to get reliable transfers;
  • And I had to 128 byte records - not 1 byte records - to get decent transfer performance!
  • On tip to send the file I would ask it to fork 'dd' to do the transfer (using ~C) and asking it to pad to the 128 byte boundary:
    • dd if=file bs=128 conv=sync
The binary I chose (DSZ.COM) didn't mind the extra padding, it wasn't checksumming itself.

Here's the hacky QBASIC program I hacked up to do the transfer:

OPEN "RB", #2, "MYFILE.TXT", 128

' Note: LEN = 128 is part of the OPEN line, not a separate line!
OPEN "COM1:9600,N,8,1,CD0,CS500,DS500,OP0,BIN,TB2048,RB32768" FOR RANDOM AS #1 LEN = 128

size# = 413 '413 * 128 byte transfer
DIM c AS STRING * 128 ' 128 byte record
FOR i = 1 TO size#
  GET #1, , c
  PUT #2, , c
NEXT i
CLOSE #2
CLOSE #1

Now, this is hackish, but specifically:
  • 9600 baud, 8N1, hardware flow control, 32K receive buffer.
  • 128 byte record size for both the file and UART transfers.
  • the DSZ.COM file size, padded to 128 bytes, was 413 blocks. So, 413 block transfers.
  • Don't forget to CLOSE the file once you've written, or DOS won't finalise the file and you'll end up with a 0 byte file.
This plus tip configured for 9600 and hardware flow control did the right thing. I then used DSZ to use ZMODEM to transfer a fresh copy of itself, and CAT.EXE (Alley Cat!)

Ok, so that bootstrapped enough of things to get a ZMODEM transfer binary onto a bootable floppy disc containing a half-baked DOS 5.0 installation. I can write software with QBASIC and I can transfer files on/off using ZMODEM.

Next up, getting XT-IDE going in this PC/AT and why it isn't ... well, complete.



by Adrian (noreply@blogger.com) at December 17, 2020 08:49 PM

December 16, 2020

Oshogbo

How to configure a network dump in FreeBSD?

A network dump can be very useful for collecting kernel crash dumps from embedded machines and from machines with more RAM than available swap partition size. Besides netdumps, we can also try to compress the core dump; however, even then there often is not enough swap to hold the whole core dump. In such a situation, using a network dump is a convenient and reliable way of collecting a kernel dump.
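
The full post covers the setup; purely as a flavour of the client side, dumpon(8) can point kernel dumps at a netdump server roughly like this (the addresses and interface are made up, and the server end needs to run a netdump daemon):

# dumpon -c 192.0.2.10 -g 192.0.2.1 -s 192.0.2.100 em0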

December 16, 2020 11:05 PM

[PL] How play Mario?

Ugh, it's getting crowded here, a second post within two months, not bad ;] (worth adding that this post appears at the urging of a certain person (greets, Gyn ;>))... Today I will describe a project I have been working on recently. The idea of using this technology has been with me since IGK, but only recently did I make something fit to be shown publicly.

December 16, 2020 11:05 PM

November 06, 2020

Neiracs

How to run bhyve in a jail

I’ll setup a jail dedicated to run bhyve vms , for jail creation I’ll use bastillebsd

Install bastillebsd to create and manage jails.

# pkg install bastillebsd

Setup bastillebsd

Follow the getting started guide at https://bastillebsd.org/getting-started/
I’m using zfs so /usr/local/etc/bastille/bastille.conf I must edit bastille.conf (this must be done before bootstraping a release).

I used this in bastille.conf:

bastille_zfs_enable="YES"
bastille_zfs_zpool="zroot"

Create a set of devfs rules to allow running bhyve inside a jail. Edit /etc/devfs.rules, creating it if it does not exist.

[devfs_rules_bhyve_jail=25]
add include $devfsrules_jail
add path vmm unhide
add path vmm/* unhide
add path tap* unhide
add path nmdm* unhide

Create a new jail that will use these rules.

sudo bastille create --vnet test-bhyve 12.2-RELEASE 192.168.1.225 em0

Modify test-bhyve jail.conf for this jail:

sudo bastille edit test-bhyve

Now add the allow.vmm; option and make sure devfs_ruleset points at the ruleset created above (25 here):

allow.vmm;

So the jail.conf will look like:

test-bhyve {
  devfs_ruleset =25;
  enforce_statfs = 2;
  exec.clean;
  exec.consolelog = /var/log/bastille/test-bhyve_console.log;
  exec.start = '/bin/sh /etc/rc';
  exec.stop = '/bin/sh /etc/rc.shutdown';
  host.hostname = test-bhyve;
  mount.devfs;
  mount.fstab = /usr/local/bastille/jails/test-bhyve/fstab;
  path = /usr/local/bastille/jails/test-bhyve/root;
  securelevel = 2;
  allow.vmm;
  allow.raw_sockets;
  vnet;
  vnet.interface = e0b_bastille0;
  exec.prestart += "jib addm bastille0 em0";
  exec.poststop += "jib destroy bastille0";
}

Load the required modules on the host:

kldload vmm
kldload nmdm
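
To have those modules loaded on every boot of the host, something along these lines should work (a sketch using sysrc's append syntax):

sysrc kld_list+="vmm nmdm"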

Start test-bhyve jail

sudo bastille start test-bhyve

Go inside the new jail

sudo bastille console test-bhyve

Now install vm-bhyve inside the jail and follow the vm-bhyve setup at https://github.com/churchers/vm-bhyve

pkg install vm-bhyve
sysrc vm_enable="YES"
mkdir /vms
sysrc vm_dir="/vms"
vm init
cp /usr/local/share/examples/vm-bhyve/* /vms/.templates/
vm switch create public
vm switch add public vnet0
vm iso https://download.freebsd.org/ftp/releases/ISO-IMAGES/12.2/FreeBSD-12.2-RELEASE-amd64-bootonly.iso
vm create test
vm install -f test FreeBSD-12.2-RELEASE-amd64-bootonly.iso

By default a vm-bhyve VM has 256 MB of RAM; if you need more, run

vm config test

and configure how much RAM you need.
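
vm config opens the guest's configuration file in your editor; based on the stock templates it looks roughly like the sketch below (the values are only illustrative), and memory is the knob to raise:

loader="bhyveload"
cpu=1
memory=1G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"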

Issues

If you are going to use DHCP in the VM, you will need to configure the interface to use SYNCDHCP.
According to this post, SYNCDHCP works, but the VM needs to be rebooted first.

This gives me an interface named vnet0 in my jails that I can then configure through the jail’s rc.conf. For some reason SYNCDHCP works but plain DHCP does not in the guest’s rc.conf.
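
Inside the guest, where the virtio NIC shows up as vtnet0, the corresponding rc.conf line would be something like:

ifconfig_vtnet0="SYNCDHCP"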


by cneirabustos at November 06, 2020 06:38 PM

October 20, 2020

Warner Losh

How to Recover From a BIOS Upgrade

Recovering From Firmware Upgrade

Recently, I booted Windows on my laptop for the first time in a while to play Portal 2 with my son. It asked me to upgrade, and I said 'sure, upgrade the BIOS.'

And then I couldn't boot FreeBSD...  The BIOS upgrade deleted all the BootXXXX variables. So it only booted Windows. I'm stuck, right? I have to download a FreeBSD image and boot off the USB drive. Or did I?

Note: Even the update program referred to updating the firmware as 'updating the BIOS.' BIOS is the generic term for the bit of code that runs before the OS. Sadly, it's also the term people use to describe the pre-UEFI boot environment on PCs, so it can be a confusing term to use. Firmware seems a bit better, but it's also ambiguous because different bits of hardware (like wireless cards) also need firmware loaded.

How to Recover

I was in Windows. I needed to mount the EFI system partition (ESP). So, I opened the Administrative Console and got a command prompt from there on the 'Tools' tab. This led to the familiar C: prompt. I had no W: drive, so I used that letter to mount the ESP. There I was able to copy FreeBSD's boot loader like so:

C:\WINDOWS\system32> mountvol w: /s
C:\WINDOWS\system32>w:
W:\> cd EFI\Microsoft\Boot
W:\EFI\Microsoft\Boot> ren bootmgfw.efi bootmgfw-back.efi
W:\EFI\Microsoft\Boot> copy W:\EFI\FreeBSD\loader.efi bootmgfw.efi
W:\EFI\Microsoft\Boot> 

I then rebooted from the menu.

I had remembered from my efibootmgr hacking that the boot loader was here. After I booted to FreeBSD, I was able to confirm:

% sudo efibootmgr -v
Boot to FW : false
BootCurrent: 0001
Timeout    : 0 seconds
BootOrder  : 0001, 2001, 2002, 2003
+Boot0001* Windows Boot Manager HD(1,GPT,f859c46d-19ee-4e40-8975-3ad1ab00ac09,0x800,0x82000)/File(\EFI\Microsoft\Boot\bootmgfw.efi)
                                   nvd0p1:/EFI/Microsoft/Boot/bootmgfw.efi /boot/efi//EFI/Microsoft/Boot/bootmgfw.efi
 Boot2001* EFI USB Device 
 Boot2002* EFI DVD/CDROM 
 Boot2003* EFI Network 


Unreferenced Variables:

I was then able to add FreeBSD back with efibootmgr. I mount the ESP on /boot/efi:

sudo efibootmgr --create --loader /boot/efi/EFI/freebsd/loader.efi --kernel /boot/kernel/kernel --activate --verbose --label FreeBSD
Boot to FW : false
BootCurrent: 0001
Timeout    : 0 seconds
BootOrder  : 0000, 0001, 2001, 2002, 2003
 Boot0000* FreeBSD HD(1,GPT,f859c46d-19ee-4e40-8975-3ad1ab00ac09,0x800,0x82000)/File(\EFI\freebsd\loader.efi)
               nvd0p1:/EFI/freebsd/loader.efi /boot/efi//EFI/freebsd/loader.efi
           HD(6,GPT,68f0614d-c322-11e9-857a-b1710dd81c0d,0x7bf1000,0x1577e000)/File(boot\kernel\kernel)
               nvd0p6:boot/kernel/kernel /boot/kernel/kernel
+Boot0001* Windows Boot Manager HD(1,GPT,f859c46d-19ee-4e40-8975-3ad1ab00ac09,0x800,0x82000)/File(\EFI\Microsoft\Boot\bootmgfw.efi)
                                   nvd0p1:/EFI/Microsoft/Boot/bootmgfw.efi /boot/efi//EFI/Microsoft/Boot/bootmgfw.efi
 Boot2001* EFI USB Device 
 Boot2002* EFI DVD/CDROM 
 Boot2003* EFI Network 


Unreferenced Variables:
%

Once this is in place, I needed to undo what I'd done to Windows:

% cd /boot/efi/EFI/Microsoft/Boot
% sudo mv bootmgfw-back.efi bootmgfw.efi

I was then able to reboot to FreeBSD. And fun fact: since the boot order is 0000, 0001, that means I can boot to Windows by just typing 'quit' at the loader prompt. This causes the boot loader to exit with an error, which causes the BIOS to try the next BootXXXX variable, in this case Windows.

And there it is: I was able to recover my system without downloading a USB image...


by Warner Losh (noreply@blogger.com) at October 20, 2020 04:42 PM

October 03, 2020

Alexander Leidinger

Self-signed certificates and LDAPS (OpenLDAP) in PHP (or python)

This is not about how to generate a self-signed certificate; this is about how to configure an LDAP client to connect securely to an LDAP server which has a self-signed certificate.

Recently I spent a lot of time searching for how to make this kind of setup work, but it seems nobody uses the keywords of the headline in their HOWTOs, or nobody is really setting up a truly secure connection with self-signed certificates. So here is my attempt to document this for those who are interested in a secure setup.

How OpenLDAP is checking the certificates normally

OpenLDAP uses the certificate store which is configured for OpenSSL. So any certificate which is signed by one of the CAs in the OpenSSL cert-store is trusted.

Secure setup

Most of the time you do not expose an LDAP server to the outside where a certificate from one of the trusted-by-default CAs is needed. A certificate from your internal CA is enough, and in some cases a self-signed certificate is sufficient too.

An easy solution could be to add either the root certificate of your CA or the self-signed certificate to the trust-store of OpenSSL (not every OS / distribution has this in the same location, you have to check where this is for your OS; for FreeBSD 13+ this is /usr/local/etc/ssl/certs/, see also certctl(8) there). But this would mean you trust the certificate you put there in addition to the default certificates (modulo any blacklisting you made yourself). Theoretically this means anyone who is able to get hold of a certificate from a public CA for your LDAP server could perform a man-in-the-middle attack (you need to consider yourself how feasible this is in your infrastructure setup and how likely it is to happen).

More secure operation

Let’s say you run a service which needs to be able to make TLS sessions to systems which use certificates from public CAs and you want to make sure a connection to the LDAP backend can not use certificates from public CAs.

To tighten the setup in this case, you need to specify that the client which uses the OpenLDAP client libraries uses a different trust-store for certificate validation.

For the OpenLDAP client utilities there is a global config file for this (on FreeBSD this is /usr/local/etc/openldap/ldap.conf). For other tools, like PHP, this needs to be done in the per-user config file ~/.ldaprc. Both files have the same syntax.

With php-ldap you normally run the service either in php-fpm or in an apache-php-module. In both cases the process which runs is configured to run as a non-root user which may or may not have a home directory (in FreeBSD the www user which is typically used for that has no home directory).

HOWTO

  1. create a home directory
  2. create a separate trust-store for LDAP
  3. configure php-ldap / py-ldap to make use of the separate trust-store

Step 1 – create a home directory

Choose a suitable place and create a directory there. It doesn’t need to be in /home; it can be anywhere. The important part is that it is readable by the user who runs the application using php-ldap. It does not need to be writable by this user. In there you need to create the .ldaprc file (again, it only needs to be readable by the user) with the content from step 3.

Step 2 – create a separate trust-store for LDAP

In FreeBSD the global ldap config is in /usr/local/etc/openldap/ldap.conf. Theoretically you can put the trust-store for LDAP in any place you want. In my setup I consider it to belong in /usr/local/etc/openldap/ssl/. So make a directory – like /usr/local/etc/openldap/ssl – for the trust-store, and copy the certificate of the LDAP server there.

Attention! Only the public certificate, not the private key! If you only have one file on the server for this, it is the combined key+certificate (if you don’t know, or aren’t able to deduce by looking into the file, how to get rid of the key… there is a lot of info out there on the WWW which explains it). The directory and the certificate need to be accessible (read for the file, execute for the directory) by any user which shall make use of this. It does not hurt to have it accessible by everyone (you made sure there is no private key from the server in there, right?).

Step 3 – configure php-ldap / py-ldap to make use of the separate trust-store

If you use php-fpm, you need to configure a home directory in the FPM pool configuration section. As already said above, it does not need to be inside /home; it depends upon your needs. Here in this example let me use /home. The FPM config line to add is then something like:
env[HOME] = /home/php-fpm
You could achieve the same by changing the home directory in the password database, but this would have an effect on all processes run with this user, whereas here it is just for the php-fpm processes (and their children).

If you use apache instead of php-fpm, you need to configure something similar for the corresponding virtual host:
SetEnv HOME /home/php-fpm

With this you can now configure /home/php-fpm/.ldaprc to point to the LDAP trust-store:
TLS_CACERT /usr/local/etc/openldap/ssl/ldap_server_cert.pem
TLS_CACERTDIR /usr/local/etc/openldap/ssl

If you use some python based application, you have to do something similar… if all else fails, it needs to be via a real home directory in the password database.

If you want to use the ldap client tools with any user, you need to add those lines to the /usr/local/etc/openldap/ldap.conf file too (there you can also set the default BASE – e.g. "BASE dc=example,dc=com" – and URI – e.g. "URI ldaps://ldap.example.com:636").

After restarting php-fpm or apache, you should now be able to make really secure connections to the ldap server.
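
A quick way to verify this from the shell is an anonymous query against the secure port (host name, port and base DN below are placeholders for your own values); with a wrong or missing CA certificate it should fail instead of silently falling back:

ldapsearch -H ldaps://ldap.example.com:636 -x -s base -b "dc=example,dc=com"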

Some important things

  • Every time you change the certificate of the LDAP server, you need to update the certificate in the clients.
  • There are two TLS modes for the LDAP server: one is “ldaps”, and one is “ldap+starttls”. If your LDAP server runs in ldaps mode (typically on port 636), you do not need to tell your php-ldap-using application to enable TLS (which does a STARTTLS after connecting… typically on port 389), but you do need to specify “ldaps://servername:636” (assuming it runs on port 636) instead of just “servername” wherever your application asks for the server name. For py-ldap I have checked just one application (netdata); there TLS needs to be enabled, and the server name has to be given without “ldaps://”, as netdata prefixes the “ldaps://” itself if TLS is enabled.
  • Some places on the internet tell you to add “TLS_REQCERT never” to ldap.conf / .ldaprc. Technically this is not needed. Depending on your point of view this can be either good or bad: specifying it saves some CPU cycles on the server and the client, and some transfer time over the network; not specifying it allows the certificate received to be compared against the certificate available locally, though I do not know if OpenLDAP does this, nor did I spend time evaluating whether it improves security (if the important parts of the certificate are out of sync, the connection will fail).

by netchild at October 03, 2020 11:08 AM

September 20, 2020

Colin Percival

On the use of a life

In a recent discussion on Hacker News, a commenter posted the following question:
Okay, so, what do we think about TarSnap? Dude was obviously a genius, and spent his time on backups instead of solving millennium problems. I say that with the greatest respect. Is this entrepreneurship thing a trap?
I considered replying in the thread, but I think it deserves an in-depth answer — and one which will be seen by more people than would notice a reply in the middle of a 100+ comment thread.

September 20, 2020 10:10 PM

July 30, 2020

Gonzo

Audio subsystem hardware internals

I wrote an introductory article on how the audio subsystem on SBCs work: CODECs, I2S, DTS, whole nine yards. WordPress editor didn’t seem to be a very convenient tool for this kind of write up so I gave asciidoc a try and so far liked it.

Link to the article: https://kernelnomicon.org/texts/sbc-audio.html

by gonzo at July 30, 2020 07:13 AM

June 04, 2020

Gonzo

yubikey-agent on FreeBSD

Some time ago Filippo Valsorda wrote yubikey-agent, a seamless SSH agent for YubiKeys. I really like YubiKeys and have worked on FreeBSD support for U2F in Chromium and pyu2f, so getting yubikey-agent ported looked like an interesting project. It took some hacking to make it work but overall it wasn’t hard. Following is a roadmap for getting it set up on FreeBSD. The actual details depend on your system (as you will see).

The first step is to set up the middleware for accessing smart cards (YubiKey implements the CCID smart-card protocol). The pcsc-lite package provides a daemon and a library for clients to communicate with the daemon. ccid is a plugin for pcsc-lite that implements the actual CCID protocol over USB. The devd rules below make the daemon re-scan USB devices on hotplug.

sudo pkg install ccid pcsc-lite
sudo mkdir -p /usr/local/etc/devd
sudo tee /usr/local/etc/devd/pcscd.conf << __EOF__
attach 100 {
        device-name "ugen[0-9]+";
        action "/usr/local/sbin/pcscd -H";
};
detach 100 {
        device-name "ugen[0-9]+";
        action "/usr/local/sbin/pcscd -H";
};
__EOF__

sudo service devd restart
sudo sysrc pcscd_enable="YES"
sudo service pcscd start

go and git are build requirements for the app. The go get command is required because FreeBSD support was only recently merged into piv-go and the latest release, referenced in go.mod, still does not have it. The go install command installs the app in ~/go/bin/. When a new version of piv-go is released and yubikey-agent switches to using it, all these commands can be replaced with a single go get github.com/FiloSottile/yubikey-agent command.

sudo pkg install go git
git clone https://github.com/FiloSottile/yubikey-agent.git
cd yubikey-agent
go get github.com/go-piv/piv-go@a3e5767e
go build
go install

The binary is in ~/go/bin/ directory, you can add it to your $PATH to type less.

The next step is setting up a YubiKey; it’s well documented on the official site. One caveat though: if the PIN is shorter than 6 characters, the setup fails with the somewhat confusing message: “‼ The default PIN did not work.”

The actual usage of yubikey-agent depends on your setup. First of all, yubikey-agent is an “eventually GUI” app. At some point, when entering a PIN is required, it starts the pinentry command, whose task is to present the user with a dialog, get the PIN, and pass it back to the app. There are multiple pinentry flavors with different front-ends: TTY, Qt, GTK. The gopass module used in yubikey-agent does not work with the plain TTY backend and requires pinentry to be a GUI app. On Debian/Ubuntu users can switch between flavors and make /usr/bin/pinentry point to either the Qt5 or the GTK version, but on FreeBSD /usr/local/bin/pinentry is always the TTY one. I worked around it by installing the pinentry-qt5 package and making a symlink.

sudo pkg install pinentry-qt5
sudo ln -s /usr/local/bin/pinentry-qt5 /usr/local/bin/pinentry

If you need /usr/local/bin/pinentry to be the TTY version for some other stuff, there may be a problem. How to work around this depends on your requirements; I don’t have a ready recipe.

Because yubikey-agent is “eventually GUI” you can either start it in your .xsession or .xinitrc file, or start it some other way with the DISPLAY environment variable set. Other than that, the official documentation is a good source of information on how to use it.
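
For reference, here is the way I start it and point SSH at it, reduced to a sketch; the socket path is my own choice, and the -setup and -l flags should be double-checked against the version you built:

# one-time: generate the SSH key on the YubiKey
~/go/bin/yubikey-agent -setup

# in ~/.xsession or ~/.xinitrc: run the agent and point SSH at its socket
~/go/bin/yubikey-agent -l "$HOME/.yubikey-agent.sock" &
export SSH_AUTH_SOCK="$HOME/.yubikey-agent.sock"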

by gonzo at June 04, 2020 07:03 AM

March 31, 2020

Erwin Lansing

Enjoy the view

Even if we cannot enjoy the view from the air, we can enjoy it from the ground. Here’s to keeping our spirits at least 30.000 ft high.

KLM KL1128 leaving Copenhagen (CPH)
on March 31, 2020

The post Enjoy the view appeared first on Droso.

by erwin at March 31, 2020 03:12 PM

March 19, 2020

Alexander Leidinger

Fighting the Coronavirus with FreeBSD (Folding@Home)

Photo by Fusion Medical Animation on Unsplash

Here is a quick HOWTO for those who want to provide some FreeBSD-based compute resources to help find vaccines. I have not made a port out of this and do not know yet if I will get the time to make one. If someone wants to make a port, go ahead, do not wait for me.

UPDATE 2020-03-22: 0mp@ made a port out of this, it is in “biology/linux-foldingathome”.

  • Download the linux RPM of the Folding@Home client (this covers fahclient only).
  • Enable the linuxulator (kernel modules and linux_base (first part of chapter 10.2) is enough; see the sketch after this list).
  • Make sure linprocfs/linsysfs are mounted in /compat/linux/{proc|sys}.
  • cd /compat/linux
  • tar -xf /path/to/fahclient....rpm
  • add the “fahclient” user (give it a real home directory)
  • make sure there is no /compat/linux/dev or alternatively mount devfs there
  • mkdir /compat/linux/etc/fahclient
  • cp /compat/linux/usr/share/doc/fahclient/sample-config.xml /compat/linux/etc/fahclient/config.xml
  • chown -R fahclient /compat/linux/etc/fahclient
  • edit /compat/linux/etc/fahclient/config.xml: modify user (mandatory) / team (optional: the FreeBSD team is 11743) / passkey (optional) as appropriate (if you want to control the client remotely, you need to modify some more parts, but somehow the client “loses” a file descriptor and stops working as it should if you do that on FreeBSD)
  • If you have the home directories of the users set to no-exec (e.g. separate ZFS datasets with exec=off): make sure the home directory of the fahclient user has exec permissions enabled
  • cd ~fahclient (important! it tries to write to the current work directory when you start it)
  • Start it: /usr/sbin/daemon /compat/linux/usr/bin/FAHClient /compat/linux/etc/fahclient/config.xml --run-as fahclient --pid-file=/var/run/fahclient.pid >/dev/null 2>&1
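
As a rough sketch, the linuxulator steps from the list above boil down to something like this (the linux_base flavor and the mount locations are whatever the Handbook chapter suggests for your release):

kldload linux linux64            # linuxulator kernel modules (linux64 on amd64)
pkg install linux_base-c7        # CentOS 7 based linux_base
# linprocfs/linsysfs mounts, e.g. via /etc/fstab:
linprocfs   /compat/linux/proc   linprocfs   rw   0   0
linsysfs    /compat/linux/sys    linsysfs    rw   0   0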

By default it will now pick up some SARS-CoV‑2 (COVID-19) related folding tasks. There are some more config options (e.g. how much of the system resources are used). Please refer to the official Folding@Home site for more information about that. Be also aware that there is a big rise in compute resources donated to Folding@Home, so the pool of available work units may be empty from time to time, but they are working on adding more work units. Be patient.


by netchild at March 19, 2020 08:47 AM

October 22, 2018

Dag-Erling Smørgrav

DNS over TLS in FreeBSD 12

With the arrival of OpenSSL 1.1.1, an upgraded Unbound, and some changes to the setup and init scripts, FreeBSD 12.0, currently in beta, now supports DNS over TLS out of the box.

DNS over TLS is just what it sounds like: DNS over TCP, but wrapped in a TLS session. It encrypts your requests and the server’s replies, and optionally allows you to verify the identity of the server. The advantages are protection against eavesdropping and manipulation of your DNS traffic; the drawbacks are a slight performance degradation and potential firewall traversal issues, as it runs over a non-standard port (TCP port 853) which may be blocked on some networks. Let’s take a look at how to set it up.

Basic setup

As a simple test case, let’s set up our 12.0-ALPHA10 VM to use Cloudflare’s DNS service:

# uname -r
12.0-ALPHA10
# cat >/etc/rc.conf.d/local_unbound <<EOF
local_unbound_enable="YES"
local_unbound_tls="YES"
local_unbound_forwarders="1.1.1.1@853 1.0.0.1@853"
EOF
# service local_unbound start
Performing initial setup.
destination:
/var/unbound/forward.conf created
/var/unbound/lan-zones.conf created
/var/unbound/control.conf created
/var/unbound/unbound.conf created
/etc/resolvconf.conf not modified
Original /etc/resolv.conf saved as /var/backups/resolv.conf.20181021.192629
Starting local_unbound.
Waiting for nameserver to start... good
# host www.freebsd.org
www.freebsd.org is an alias for wfe0.nyi.freebsd.org.
wfe0.nyi.freebsd.org has address 96.47.72.84
wfe0.nyi.freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
wfe0.nyi.freebsd.org mail is handled by 0 .

Note that this is not a configuration you want to run in production—we will come back to this later.

Performance

The downside of DNS over TLS is the performance hit of the TCP and TLS session setup and teardown. We demonstrate this by flushing our cache and (rather crudely) measuring a cache miss and a cache hit:

# local-unbound-control reload
ok
# time host www.freebsd.org >x
host www.freebsd.org > x 0.00s user 0.00s system 0% cpu 0.553 total
# time host www.freebsd.org >x
host www.freebsd.org > x 0.00s user 0.00s system 0% cpu 0.005 total

Compare this to querying our router, a puny Soekris net5501 running Unbound 1.8.1 on FreeBSD 11.1-RELEASE:

# time host www.freebsd.org gw >x
host www.freebsd.org gw > x 0.00s user 0.00s system 0% cpu 0.232 total
# time host www.freebsd.org 192.168.144.1 >x
host www.freebsd.org gw > x 0.00s user 0.00s system 0% cpu 0.008 total

or to querying Cloudflare directly over UDP:

# time host www.freebsd.org 1.1.1.1 >x      
host www.freebsd.org 1.1.1.1 > x 0.00s user 0.00s system 0% cpu 0.272 total
# time host www.freebsd.org 1.1.1.1 >x
host www.freebsd.org 1.1.1.1 > x 0.00s user 0.00s system 0% cpu 0.013 total

(Cloudflare uses anycast routing, so it is not so unreasonable to see a cache miss during off-peak hours.)

This clearly shows the advantage of running a local caching resolver—it absorbs the cost of DNSSEC and TLS. And speaking of DNSSEC, we can separate that cost from that of TLS by reconfiguring our server without the latter:

# cat >/etc/rc.conf.d/local_unbound <<EOF
local_unbound_enable="YES"
local_unbound_tls="NO"
local_unbound_forwarders="1.1.1.1 1.0.0.1"
EOF
# service local_unbound setup
Performing initial setup.
destination:
Original /var/unbound/forward.conf saved as /var/backups/forward.conf.20181021.205328
/var/unbound/lan-zones.conf not modified
/var/unbound/control.conf not modified
Original /var/unbound/unbound.conf saved as /var/backups/unbound.conf.20181021.205328
/etc/resolvconf.conf not modified
/etc/resolv.conf not modified
# service local_unbound start
Starting local_unbound.
Waiting for nameserver to start... good
# time host www.freebsd.org >x
host www.freebsd.org > x 0.00s user 0.00s system 0% cpu 0.080 total
# time host www.freebsd.org >x
host www.freebsd.org > x 0.00s user 0.00s system 0% cpu 0.004 total

So does TLS add nearly half a second to every cache miss? Not quite, fortunately—in our previous tests, our first query was not only a cache miss but also the first query after a restart or a cache flush, resulting in a complete load and validation of the entire path from the name we queried to the root. The difference between a first and second cache miss is quite noticeable:

# time host www.freebsd.org >x 
host www.freebsd.org > x 0.00s user 0.00s system 0% cpu 0.546 total
# time host www.freebsd.org >x
host www.freebsd.org > x 0.00s user 0.00s system 0% cpu 0.004 total
# time host repo.freebsd.org >x
host repo.freebsd.org > x 0.00s user 0.00s system 0% cpu 0.168 total
# time host repo.freebsd.org >x
host repo.freebsd.org > x 0.00s user 0.00s system 0% cpu 0.004 total

Revisiting our configuration

Remember when I said that you shouldn’t run the sample configuration in production, and that I’d get back to it later? This is later.

The problem with our first configuration is that while it encrypts our DNS traffic, it does not verify the identity of the server. Our ISP could be routing all traffic to 1.1.1.1 to its own servers, logging it, and selling the information to the highest bidder. We need to tell Unbound to validate the server certificate, but there’s a catch: Unbound only knows the IP addresses of its forwarders, not their names. We have to provide it with names that will match the x509 certificates used by the servers we want to use. Let’s double-check the certificate:

# :| openssl s_client -connect 1.1.1.1:853 |& openssl x509 -noout -text |& grep DNS
DNS:*.cloudflare-dns.com, IP Address:1.1.1.1, IP Address:1.0.0.1, DNS:cloudflare-dns.com, IP Address:2606:4700:4700:0:0:0:0:1111, IP Address:2606:4700:4700:0:0:0:0:1001

This matches Cloudflare’s documentation, so let’s update our configuration:

# cat >/etc/rc.conf.d/local_unbound <<EOF
local_unbound_enable="YES"
local_unbound_tls="YES"
local_unbound_forwarders="1.1.1.1@853#cloudflare-dns.com 1.0.0.1@853#cloudflare-dns.com"
EOF
# service local_unbound setup
Performing initial setup.
destination:
Original /var/unbound/forward.conf saved as /var/backups/forward.conf.20181021.212519
/var/unbound/lan-zones.conf not modified
/var/unbound/control.conf not modified
/var/unbound/unbound.conf not modified
/etc/resolvconf.conf not modified
/etc/resolv.conf not modified
# service local_unbound restart
Stopping local_unbound.
Starting local_unbound.
Waiting for nameserver to start... good
# host www.freebsd.org
www.freebsd.org is an alias for wfe0.nyi.freebsd.org.
wfe0.nyi.freebsd.org has address 96.47.72.84
wfe0.nyi.freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
wfe0.nyi.freebsd.org mail is handled by 0 .

How can we confirm that Unbound actually validates the certificate? Well, we can run Unbound in debug mode (/usr/sbin/local-unbound -dd -vvv) and read the debugging output… or we can confirm that it fails when given a name that does not match the certificate:

# perl -p -i -e 's/cloudflare/cloudfire/g' /etc/rc.conf.d/local_unbound
# service local_unbound setup
Performing initial setup.
destination:
Original /var/unbound/forward.conf saved as /var/backups/forward.conf.20181021.215808
/var/unbound/lan-zones.conf not modified
/var/unbound/control.conf not modified
/var/unbound/unbound.conf not modified
/etc/resolvconf.conf not modified
/etc/resolv.conf not modified
# service local_unbound restart
Stopping local_unbound.
Waiting for PIDS: 33977.
Starting local_unbound.
Waiting for nameserver to start... good
# host www.freebsd.org
Host www.freebsd.org not found: 2(SERVFAIL)

But is this really a failure to validate the certificate? Actually, no. When provided with a server name, Unbound will pass it to the server during the TLS handshake, and the server will reject the handshake if that name does not match any of its certificates. To truly verify that Unbound validates the server certificate, we have to confirm that it fails when it cannot do so. For instance, we can remove the root certificate used to sign the DNS server’s certificate from the test system’s trust store. Note that we cannot simply remove the trust store entirely, as Unbound will refuse to start if the trust store is missing or empty.

While we’re talking about trust stores, I should point out that you currently must have ca_root_nss installed for DNS over TLS to work. However, 12.0-RELEASE will ship with a pre-installed copy.

Conclusion

We’ve seen how to set up Unbound—specifically, the local_unbound service in FreeBSD 12.0—to use DNS over TLS instead of plain UDP or TCP, using Cloudflare’s public DNS service as an example. We’ve looked at the performance impact, and at how to ensure (and verify) that Unbound validates the server certificate to prevent man-in-the-middle attacks.

The question that remains is whether it is all worth it. There is undeniably a performance hit, though this may improve with TLS 1.3. More importantly, there are currently very few DNS-over-TLS providers—only one, really, since Quad9 filter their responses—and you have to weigh the advantage of encrypting your DNS traffic against the disadvantage of sending it all to a single organization. I can’t answer that question for you, but I can tell you that the parameters are evolving quickly, and if your answer is negative today, it may not remain so for long. More providers will appear. Performance will improve with TLS 1.3 and QUIC. Within a year or two, running DNS over TLS may very well become the rule rather than the experimental exception.

by Dag-Erling Smørgrav at October 22, 2018 09:36 AM

April 09, 2018

Dag-Erling Smørgrav

Twenty years

Yesterday was the twentieth anniversary of my FreeBSD commit bit, and tomorrow will be the twentieth anniversary of my first commit. I figured I’d split the difference and write a few words about it today.

My level of engagement with the FreeBSD project has varied greatly over the twenty years I’ve been a committer. There have been times when I worked on it full-time, and times when I did not touch it for months. The last few years, health issues and life events have consumed my time and sapped my energy, and my contributions have come in bursts. Commit statistics do not tell the whole story, though: even when not working on FreeBSD directly, I have worked on side projects which, like OpenPAM, may one day find their way into FreeBSD.

My contributions have not been limited to code. I was the project’s first Bugmeister; I’ve served on the Security Team for a long time, and have been both Security Officer and Deputy Security Officer; I managed the last four Core Team elections and am doing so again this year.

In return, the project has taught me much about programming and software engineering. It taught me code hygiene and the importance of clarity over cleverness; it taught me the ins and outs of revision control; it taught me the importance of good documentation, and how to write it; and it taught me good release engineering practices.

Last but not least, it has provided me with the opportunity to work with some of the best people in the field. I have the privilege today to count several of them among my friends.

For better or worse, the FreeBSD project has shaped my career and my life. It set me on the path to information security in general and IAA in particular, and opened many a door for me. I would not be where I am now without it.

I won’t pretend to be able to tell the future. I don’t know how long I will remain active in the FreeBSD project and community. It could be another twenty years; or it could be ten, or five, or less. All I know is that FreeBSD and I still have things to teach each other, and I don’t intend to call it quits any time soon.

Previously

by Dag-Erling Smørgrav at April 09, 2018 08:35 PM

February 05, 2018

Remko Lodder

Response zones in BIND (RPZ/Blocking unwanted traffic).

A while ago, my dear colleague Mattijs came up with an interesting option in BIND: response policy zones. One can create custom "zones" and enforce a policy on them.

I had never worked with it before, so I had no clue at all what to expect from it. Mattijs told me how to configure it (see below for an example) and offered to slave his RPZ policy domains.

All of a sudden I was no longer getting a lot of ADS/SPAM and other things. It was filtered. Wow!

His RPZ zones were custom made and based on PiHole; where PiHole adds hosts to the local "hosts" file and points them at 127.0.0.1 (your local machine), which prevents them from reaching the actual server at all, RPZ policies are much stronger and more dynamic.

RPZ policies offer the option of "redirecting" queries. What do I mean by that? Well, you can add an advertisement (AD for short) site / domain to the RPZ policy and return NXDOMAIN: it no longer exists for the end-user. But you can also CNAME it to a domain/host you own, add a webserver to that host, and tell the user querying the page: "The site you are trying to reach has been pro-actively blocked by the DNS software. This is an automated action and an automated response. If you feel that this is not appropriate, please let us know on <mail link>", or something like that.
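
To give an idea of what such a setup looks like, here is a minimal sketch; the zone name, file path and blocked host names are placeholders of my own, not Mattijs' actual zones, and the obligatory SOA/NS records are omitted:

// named.conf: activate the response policy zone
options {
        response-policy { zone "rpz.example.net"; };
};

zone "rpz.example.net" {
        type master;
        file "/usr/local/etc/namedb/master/rpz.example.net.db";
        allow-query { localhost; };
};

; rpz.example.net.db: the records implement the policy
ads.tracker.example       IN CNAME .                        ; answer NXDOMAIN
*.ads.tracker.example     IN CNAME .                        ; same for subdomains
casino.example            IN CNAME blockpage.mydomain.tld.  ; redirect to a block page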

Once I noticed that and saw the value, I immediately saw the benefit for companies and most likely schools and home users. Mattijs had a busy time at work and I was recovering from health issues, so I had "plenty" of time to investigate and read up on this. The RPZ policies were not updated a lot and caused some problems for my ereaders for example (msftcncsi.com was used by them, see another post on this website for being grumpy about that). And I wanted to learn more about it. So what did I do?

Yes, I wrote my own parser. In Perl. I wrote an "rpz-generator" (it's actually called that). I added the sources Mattijs used and generated my own files. They are rather huge, since I blocked ads, malware, fraud, exploits, Windows stuff and various other things (gambling, fake news, and stuff like that).

I also included some whitelists, because msftcncsi.com was added to the lists and it made my ereaders go berserk, and we play a few games here and there which use some advertisement sites, so we wanted to exempt them as well. It's better to know which ones they are and selectively allow them than to have traffic going to every data collector out there.

This works rather well. I do not get a lot of complaints that things are not working. I do see a lot of queries going to "banned" sites every day, so it is doing something. The most obvious effect is that search results on Google are not always clickable. The ones with those [ADV] links are blocked because they are advertising Google-sponsored sites, and they are on the list.. and google-analytics etc. It doesn't cause much harm to our internet surfing or user experience, with the exception of the ADV sites I just mentioned. My wife sometimes wants to click on those because she searches for something that happens to be on that list, but apart from that we are doing just fine.

One thing though: I wrote my setup and this article with my setup using "NXDOMAIN", which just gives back "site does not exist" messages. I want to make my script smarter by making it selectable, so that some categories are CNAMEd to a filtering domain and webpage, and some are NXDOMAIN'ed. If someone has experience with that, please show me some ideas, what that looks like, and whether your end-users can do something with it or not. I think schools will be happy to present a block page instead of NXDOMAIN'ing some sites 🙂

Acknowledgements: Mattijs for teaching and showing me RPZ, ISC for placing RPZ in NAMED, and zytrax.com for having such excellent documentation to RPZ. The perl developers for having such a great tool around, and the various sites I use to get the blocklists from. Thank you all!

If you want to know more about the tool, please contact me and we can share whatever information is available 🙂

by Remko Lodder at February 05, 2018 11:09 PM

November 25, 2017

Erwin Lansing

120. Red ale

5,5kg Maris Otter
500g Crystal 60L
450g Munich I
100g Chocolate malt

Mash for 75 minutes at 65°C

30g Cascade @ 60 min.
30g Centennial @60 min.
30g Cascade @ 10 min.
30g Centennial @ 10 min.

Bottled on January 7, 2018 with 150g table sugar

White Labs WLP001 California ale yeast
OG: 1.052
FG: 1.006
ABV: 6,0%

The post 120. Red ale appeared first on Droso.

by erwin at November 25, 2017 05:34 PM

August 29, 2017

Remko Lodder

FreeBSD: Using Open-Xchange on FreeBSD

If you go looking for a usable webmail application, then you might end up with Open-Xchange (OX for short). Some larger ISPs are using OX as their webmail application for customers. It has a multitude of options available: using multiple email accounts, CalDAV/CardDAV included (not externally (yet?)), etc. There are commercial options available for these ISPs, but also for smaller resellers etc.

But there is also the community edition, which you can run for free on your machine(s). It does not have some of the fancy modules that large setups need and require, and some updates might follow a bit later than they are delivered to paying customers, but it is very complete and usable.

I decided to set this up for my private clients who like to use a webmail client to access their email. At first I ran this in a VM using bhyve on FreeBSD. The VM ran CentOS 6 and had the necessary bits installed for the OX setup (see: https://oxpedia.org/wiki/index.php?title=AppSuite:Open-Xchange_Installation_Guide_for_CentOS_6). I modified the files I needed to change to get this going, and there, it just worked. But running in a VM, with of course limited CPU and memory power assigned (there is always a cap) and it being emulated, I was not very happy with it. I needed to maintain an additional installation and update it, while I have this perfectly fine FreeBSD server instead. (Note that I am not against using bhyve at all, it works very well, but I wanted to reduce my maintenance base a bit :-)).

So a few days ago I considered just moving the stuff over to the FreeBSD host instead. And actually it was rather trivial to do with the working setup on CentOS.

At this moment I do not see an easy way to get the source/components directly from within FreeBSD. I have asked OX for help on this, so that we can perhaps get this sorted out and perhaps even make a Port/pkg out of this for use with FreeBSD.

The required host changes and software installation

The first thing I did was to create a ZFS dataset for /opt. The software is normally installed there, and in this case I wanted to have a contained location which I can snapshot, delete, etc, without affecting much of the normal system. I copied over the /opt/open-xchange directory from my CentOS installation. I looked at the installation on CentOS and noticed that it used a specific user 'open-xchange', which I created on my FreeBSD host. I changed the files to be owned by this user. Getting a process listing on the CentOS machine also revealed that it needed Java/JDK, so I installed the openjdk8 pkg (''pkg install openjdk8''). The setup did not yet start; there were errors about /bin/bash missing. Obviously that required installing bash (''pkg install bash''), and you can go two ways: you can alter every shebang (#!) to match /usr/local/bin/bash (or better yet #!/usr/bin/env bash), or you can symlink /usr/local/bin/bash to /bin/bash, which is what I did (I asked OX to make it more portable by using the env variant instead).

The /var/log/open-xchange directory does not normally exist, so I created that and made sure that ''open-xchange'' could write to that. (mkdir /var/log/open-xchange && chown open-xchange /var/log/open-xchange).
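
Condensed into commands, the host preparation described above looks roughly like this (the pool name and the user's shell are my own choices, adjust to taste):

zfs create -o mountpoint=/opt zroot/opt
pw useradd open-xchange -m -s /bin/sh
# copy /opt/open-xchange over from the CentOS installation, then:
chown -R open-xchange /opt/open-xchange
pkg install openjdk8 bash
ln -s /usr/local/bin/bash /bin/bash
mkdir -p /var/log/open-xchange
chown open-xchange /var/log/open-xchange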

I was able to start up the /opt/open-xchange/sbin/open-xchange process with that. I could not yet easily reach it; in the CentOS installation there are two files in the Apache configuration that needed some attention on my FreeBSD host. The Apache include files ox.conf and proxy_http.conf will give hints about what to change. In my case I needed to do the redirect on the vhost that runs OX (RedirectMatch ^/$ /appsuite/) and make sure the /var/www/html/appsuite directory is copied over from the CentOS installation as well. (You can stick it in any location, as long as you can reach it with your web user, Alias it to the proper directory, and set up directory access.)

Apache configuration (Reverse proxy mode)

The proxy_http.conf file is more interesting, it includes the reverse proxy settings to be able to connect to the java instance of OX and service your clients. I needed to add a few modules in Apache so that it could work, I already had several proxy modules enabled for different reasons, so the list below can probably be trimmed a bit to the exact modules needed, but since this works for me, I might as well just show you;

LoadModule slotmem_shm_module libexec/apache24/mod_slotmem_shm.so
LoadModule deflate_module libexec/apache24/mod_deflate.so
LoadModule expires_module libexec/apache24/mod_expires.so
LoadModule proxy_module libexec/apache24/mod_proxy.so
LoadModule proxy_connect_module libexec/apache24/mod_proxy_connect.so
LoadModule proxy_http_module libexec/apache24/mod_proxy_http.so
LoadModule proxy_scgi_module libexec/apache24/mod_proxy_scgi.so
LoadModule proxy_wstunnel_module libexec/apache24/mod_proxy_wstunnel.so
LoadModule proxy_ajp_module libexec/apache24/mod_proxy_ajp.so
LoadModule proxy_balancer_module libexec/apache24/mod_proxy_balancer.so
LoadModule lbmethod_byrequests_module libexec/apache24/mod_lbmethod_byrequests.so
LoadModule lbmethod_bytraffic_module libexec/apache24/mod_lbmethod_bytraffic.so
LoadModule lbmethod_bybusyness_module libexec/apache24/mod_lbmethod_bybusyness.so

After that it was running fine for me. My users can log in to the application and the local directories are being used instead of the VM which ran it first. If you look at previous documentation on this subject, you will notice that more third-party packages were needed at that time. It could easily be that more modules are needed than the ones I wrote about. My setup was not clean; the host already runs several websites (one of them being this one) and of course support packages were already installed.

Updating is currently NOT possible. The CentOS installation requires running ''yum update'' periodically, but that is obviously not possible on FreeBSD. The packages used within CentOS are not directly usable on FreeBSD. I have asked OX to provide the various community base and optional modules as .tar.gz files (raw) so that we can fetch them and install them in the proper location(s). As long as the .js/.jar files etc are all there and the scripts are modified to start, it will just work. I have not yet created a startup script for this. For the moment I will just start the VM, see whether there are updates, and copy them over instead. Since I did not need to do additional changes on the main host, it is a very easy and straightforward process in this case.

Support

There is no support for OX on FreeBSD. Of course I would like to see at least some support to promote my favorite OS more, but that is a financial decision. It might not cost a lot to deliver the .tar.gz files so that we can package them and spread the usage of OX across more installations (and thus perhaps add revenue for OX as commercial installations), but it will cost FTEs to support more than that. If you see a commercial opportunity, please let them know so that this might become more and more realistic.

The documentation written above is just how I have set up the installation and I wanted to share it with you. I do not offer support on it, but of course I am willing to answer questions you might have about the setup etc. I did not include the vhost configuration in its entirety; if that is a popular request, I will add it to this post.

Open Questions to OX:

So as mentioned I have questioned OX for some choices:

  • Please use a more portable path for the Bash shell (#!/usr/bin/env bash)
  • Please allow the use of a different localbase (/usr/local/open-xchange for example)
  • Please allow FreeBSD packagers to fetch a "clean" .tar.gz, so that we can package this for OX and distribute it for our end-users.
  • Unrelated to the post above: Please allow the usage of external caldav/carddav providers

Edit:

I have found another thing that I needed to change: I needed to use gsed (GNU sed) instead of FreeBSD's sed so that the listuser scripts work. Linux sed behaves a bit differently, but if you replace sed with gsed those scripts will work fine.

I have not yet got some feedback from OX.

by Remko Lodder at August 29, 2017 07:48 AM

April 11, 2017

Eric Anholt

This week in vc4 (2017-04-10): dmabuf fencing, meson

The big project for the last two weeks has been developing dmabuf fencing support for vc4.  Without dmabuf fences, when passing buffers between devices the user needs to manually wait for the job to finish on one (say, camera snapshot) before letting the other device get started (accumulating GL commands to texture from the camera snapshot).  That means leaving both devices idle for a moment while the CPU accumulates the command stream for the consumer, but the bigger pain is that it requires that the end user manage the synchronization.

With dma-buf fencing in the kernel, a "reservation object" generated by the dma-buf exporter tracks the fences of the various devices using the shared object, and then the device drivers get to look at that list and wait on each other's fences when using it.

So far, I've got my reservations and fences being exported from vc4, so that pl111 display can wait for vc4 to be done before actually putting a new pageflip up on the screen.  I haven't quite hooked up the other direction, for camera capture into vc4 display or GL texturing (I don't have a testcase for this, as the current camera driver doesn't expose dmabufs), but it shouldn't be hard.

On the meson front, rendercheck is now converted to meson upstream.  I've made more progress on the X Server:  Xorg is now building, and even successfully executes Xorg -pogo with the previous modesetting driver in place.  The new modesetting driver is failing mysteriously.  With a build hack I got from the meson folks and some work from ajax, the sdksyms script I complained about in my last post isn't used at all on the meson build.  And, best of all, the meson devs have written the code needed for us to not even need the build hack I'm using.

It's so nice to be using a build system that's an actual living software project.

by anholt at April 11, 2017 12:48 AM

March 27, 2017

Eric Anholt

This week in vc4 (2017-03-27): Upstream PRs, more CMA, meson

Last week I sent pull requests for bcm2835 changes for 4.12.  We've got some DT updates for HDMI audio, DSI, and SDHOST, and defconfig changes to enable SDHOST.  The DT changes to actually enable SDHOST (and get wifi working on Pi3) won't land until 4.13.

I also wrote a patch to enable using more than 256MB of CMA memory (and not require any particular alignment).  The 256MB limit was due to a hardware bug: the binner's memory allocations get dereferenced with their top 4 bits set to the top 4 bits of the tile state data array's address.  Given that tile state allocations happen after CL setup (while the binner is running and throwing overflow interrupts), there was no way to guarantee that we could find overflow memory with the top bits matching.

The new solution, suggested by someone from the set top box group, is to allocate a single 16MB to 32MB buffer at HW init time, and return all of those types of allocations out of it, since it turns out you don't need much to complete rendering of any given scene.  I've been mulling over the details of a solution for a while, and finally wrote and tested the patch I wanted (tricky parts included freeing the memory when the hardware was idle, and how to track the lifetimes of the sub-allocations).  Results look good, and I'll be submitting it this week.

However, I spent most of the week on converting the X Server over to meson.

Meson is a delightful new build system (based around Ninja on Linux) that massively speeds up builds, while also being portable to Windows (unlike autotools generally).  If you've ever tried to build the X stack on Raspberry Pi, you know that autotools is painfully slow.  It's also been the limiting factor for me in debugging my scripts for CI for the X Server -- something we'd really like to be doing as we hack on glamor or do refactors in the core.

So far all I've landed in this project is code deletion, as I find build options that aren't hooked up to anything, or code that isn't hooked up to build options.  This itself will speed up our builds, and ajax has been working in parallel on deleting a bunch of code that makes the build messier than it needs to be.  I've also submitted patches for rendercheck converting to meson (as a demo of what the conversion looks like), and I have Xephyr, Xvfb, Xdmx, and Xwayland building in the X Server with meson.

So far the only stumbling block for the meson conversion of the X Server is the X.org sdksyms.c file.  It's the ugliest part of the build -- running the C preprocessor on a generated .c that #includes a bunch of .h files, then running the output of that through awk and trying to parse C using regular expressions.  This is, as you might guess, somewhat fragile.

My hope for a solution to this is to just quit generating sdksyms.c entirely.  Using ELF sections, we can convince the linker to not garbage collect symbols that it thinks are unused.  Then we get to just decorate symbols with XORG_EXPORT or XORG_EXPORT_VAR (unfortunately have to have separate sections for RO vs RW contents), and Xorg will have the correct set of symbols exported.  I started on a branch for this, ajax got it actually building, and now we just need to bash the prototypes so that the same set of symbols are exported before/after the conversion.

by anholt at March 27, 2017 10:43 PM

October 25, 2016

Murray Stokely

FreeBSD on Intel NUCs

I've been away from FreeBSD for a few years but I wanted some more functionality on my home network that I was able to configure with my Synology NAS and router. Specifically, I wanted:

  • a configurable caching name server that would serve up authoritative private names on my LAN and also validates responses with DNSSEC.
  • a more configurable DHCP server so I could make the server assign specific IPs to specific MAC addresses.
  • more compute power for transcoding videos for Plex.

Running FreeBSD 11 on an Intel NUC seemed like an ideal solution to keep my closet tidy. As of this week, $406.63 on Amazon buys a last generation i3 Intel NUC mini PC (NUC5I3RYH), with 8GB of RAM and 128GB of SSD storage. This was the first model I tried since I found reports of others using this with FreeBSD online, but I was also able to get it working on the newer generation i5 based NUC6i5SYK with 16GB of RAM and 256GB of SSD. The major issue with these NUCs is that the Intel wireless driver is not supported in FreeBSD. I am not doing anything graphical with these boxes so I don't know how well the graphics work, but they are great little network compute nodes.

Installation

I downloaded the FreeBSD 11 memory stick images, and was pleased to see that the device booted fine off the memory stick without any BIOS configuration required. However, my installation failed trying to mount root ("Mounting from ufs:/dev/ufs/FreeBSD_Install failed with error 19."). Installation from an external USB DVD drive and over the network with PXE both proved more successful at getting me into bsdinstaller to complete the installation.

I partitioned the 128GB SSD device with 8GB of swap and the rest for the root partition (UFS, Journaled and Soft Updates). After installation I edited /etc/fstab to add a tmpfs(5) mount for /tmp. The dmesg output for this host is available in a Gist on Github.
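
The tmpfs(5) mount for /tmp is a single fstab line along these lines (the exact options are a matter of taste):

tmpfs   /tmp    tmpfs   rw,mode=1777    0   0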

Warren Block's article on SSD on FreeBSD and the various chapters of the FreeBSD Handbook were helpful. There were a couple of tools that were also useful in probing the performance of the SSD with my FreeBSD workload:

  • The smartctl tool in the sysutils/smartmontools package allows one to read detailed diagnostic information from the SSD, including wear patterns.
  • The basic benchmark built into diskinfo -t reports that the SSD is transferring 503-510MB/second.
But how well does it perform in practice?

Rough Benchmarks

This post isn't meant to report a comprehensive suite of FreeBSD benchmarks, but I did run some basic tests to understand how suitable these low power NUCs perform in practice. To start with, I downloaded the 11-stable source from Subversion and measured the build times to understand performance of the new system. All builds were done with a minimal 2 line make.conf:


MALLOC_PRODUCTION=yes
CPUTYPE?=core2

Build Speed

Build Command         Environment                           Real Time
make -j4 buildkernel  /usr/src and /usr/obj on SSD          10.06 minutes
make -j4 buildkernel  /usr/src on SSD, /usr/obj on tmpfs    9.65 minutes
make -j4 buildworld   /usr/src and /usr/obj on SSD          1.27 hours
make buildworld       /usr/src and /usr/obj on SSD          3.76 hours

Bonnie

In addition to the build times, I also wanted to look more directly at the performance reading from flash and reading from the NFS mounted home directories on my 4-drive NAS. I first tried Bonnie++, but then ran into a 13-year old bug in the NFS client of FreeBSD. After switching to Bonnie, I was able to gather some reasonable numbers. I had to use really large file sizes for the random write test to eliminate most of the caching that was artificially inflating the results. For those that haven't seen it, Brendan Gregg's excellent blog post highlights some of the issues of file system benchmarks like Bonnie.


Average of 3 Bonnie runs with 40GB file size

Configuration   Random I/O             Block Input            Block Output
                Seeks/sec    %CPU      Reads/sec    %CPU      Writes/sec    %CPU
NFS             99.2         0.9       106505       4.8       89966         7.5
SSD             8809         13.5      538671       25.3      3160917       11.3

The block input rates from my bonnie benchmarks on the SSD were within 5% of the value provided by the much quick and dirtier diskinfo -t test.

Running Bonnie with less than 40GB file size yielded unreliable benchmarks due to caching at the VM layer. The following boxplot shows the random seek performance during 3 runs each at 24, 32, and 40GB file sizes. Performance starts to even off at this level but with smaller file sizes the reported random seek performance is much higher.

Open Issues

As mentioned earlier, I liked the performance I got with running FreeBSD on a 2015-era i3 NUC5I3RYH so much that I bought a newer, more powerful second device for my network. The 2016-era i5 NUC 6i5SYK is also running great. There are just a few minor issues I've encountered so far:

  • There is no FreeBSD driver for the Intel Wireless chip included with this NUC. Code for other platforms exists but has not been ported to FreeBSD.
  • The memory stick booting issue described in the installation section. It is not clear if it didn't like my USB stick for some reason, or the port I was plugging into, or if additional boot parameters would have solved the issue. Documentation and/or code needs to be updated to make this clearer.
  • Similarly, the PXE install instructions were a bit scattered. The PXE section of the Handbook isn't specifically targeting new manual installations into bsdinstall. There are a few extra things you can run into that aren't documented well or could be streamlined.
  • Graphics / X11 are outside of the scope of my needs. The NUCs have VESA mounts so you can easily tuck them behind an LCD monitor, but it is not clear to me how well they perform in that role.

by Murray (noreply@blogger.com) at October 25, 2016 03:27 AM

April 07, 2016

FreeBSD Foundation

Introducing a New Website and Logo for the Foundation


The FreeBSD Foundation is pleased to announce the debut of our new logo and website, signaling the ongoing evolution of the Foundation identity, and ability to better serve the FreeBSD Project. Our new logo was designed to not only reflect the established and professional nature of our organization, but also to represent the link between the Project and the Foundation, and our commitment to community, collaboration, and the advancement of FreeBSD.

We did not make this decision lightly.  We are proud of the Beastie in the Business Suit and the history he encompasses. That is why you’ll still see him make an appearance on occasion. However, as the Foundation’s reach and objectives continue to expand, we must ensure our identity reflects who we are today, and where we are going in the future. From spotlighting companies who support and use FreeBSD, to making it easier to learn how to get involved, spread the word about, and work within the Project, the new site has been designed to better showcase, not only how we support the Project, but also the impact FreeBSD has on the world. The launch today marks the end of Phase I of our Website Development Project. Please stay tuned as we continue to add enhancements to the site.

We are also in the process of updating all our collateral, marketing literature, stationery, etc with the new logo. If you have used the FreeBSD Foundation logo in any of your marketing materials, please assist us in updating them. New Logo Guidelines will be available soon. In the meantime, if you are in the process of producing some new literature, and you would like to use the new Foundation logo, please contact our marketing department to get the new artwork.

Please note: we've moved the blog to the new site. See it here.





by Anne Dickison (noreply@blogger.com) at April 07, 2016 04:40 PM

February 26, 2016

FreeBSD Foundation

FreeBSD and ZFS

ZFS has been making headlines lately, so it seems like the right time to talk about the longstanding relationship between FreeBSD and ZFS.

For nearly seven years, FreeBSD has included a production quality ZFS implementation, making it one of the key features of the FreeBSD operating system. ZFS is a combined file system and volume manager. Decoupling physical media from logical volumes allows free space to be efficiently shared between all of the file systems. ZFS introduced unprecedented data integrity and reliability guarantees to storage on FreeBSD. ZFS supports varying levels of redundancy for tolerance of hardware failures and includes cryptographic checksums on all data to guard against corruption.

Allan Jude, VP of Operations at ScaleEngine and coauthor of FreeBSD Mastery: ZFS, said “We started using ZFS in 2011 because we needed to safely store a huge quantity of video for our customers. FreeBSD was, and still is, the best platform for deploying ZFS in production. We now store more than a petabyte of video using ZFS, and use ZFS Boot Environments on all of our servers.”

So why does FreeBSD include ZFS and contribute to its continued development? FreeBSD community members understand the need for continued development work as technologies evolve. OpenZFS is the truly open source successor to the ZFS project and the FreeBSD Project has participated in OpenZFS since its founding in 2013. FreeBSD developers and those from Delphix, Nexenta, Joyent, the ZFS on Linux project, and the Illumos project work together to continue improving OpenZFS.

FreeBSD’s unique open source infrastructure, copyfree license, and engaged community support the integration of a variety of free software components, including OpenZFS. FreeBSD makes an excellent operating system for servers and end users, and it provides a foundation for many open source projects and commercial products.

We're happy that ZFS is available in FreeBSD as a fully integrated, first class file system and wish to thank all of those who have contributed to it over the years.

by Anne Dickison (noreply@blogger.com) at February 26, 2016 03:23 PM

February 20, 2016

Joseph Koshy

ELF Toolchain v0.7.1

I am pleased to announce the availability of version 0.7.1 of the software being developed by the ElfToolChain project.

This release offers:

  • Better support for the DWARF4 format.
  • Support for more machine architectures.
  • Many bug fixes and improvements.

The release also contains experimental code for:

  • A library handling the Portable Executable (PE) format.
  • A link editor.

The release may be downloaded from SourceForge:

https://sourceforge.net/projects/elftoolchain/files/Sources/elftoolchain-0.7.1/

Detailed release notes are available at the URL mentioned above.

Many thanks to the project's supporters for their contributions to the project.

by Joseph Koshy (noreply@blogger.com) at February 20, 2016 12:06 PM

January 25, 2015

Giorgios Keramidas

Some Useful RCIRC Snippets

I have been using rcirc as my main IRC client for a while now, and I really like the simplicity of its configuration. All of my important IRC options now fit in a couple of screens of text.

All the rcirc configuration options are wrapped in an eval-after-load form, to make sure that rcirc settings are there when I need them, but they do not normally cause delays during the startup of all Emacs instances I may spawn:

(eval-after-load "rcirc"
  '(progn

     rcirc-setup-forms

     (message "rcirc has been configured.")))

The “rcirc-setup-forms” are then organized into three clearly separated sections:

  • Generic rcirc configuration
  • A hook for setting up nice defaults in rcirc buffers
  • Custom rcirc commands/aliases

Only the first set of options is really required; rcirc can still function as an IRC client without the other two. They are there mostly for convenience, and to avoid typing the same setup commands more than once.

The generic options I have set locally are just a handful of settings to set my name and nickname, to enable logging, to let rcirc authenticate to NickServ, and to tweak a few UI details. All this fits nicely in 21 lines of elisp:

;; Identification for IRC server connections
(setq rcirc-default-user-name "keramida"
      rcirc-default-nick      "keramida"
      rcirc-default-full-name "Giorgos Keramidas")

;; Enable automatic authentication with rcirc-authinfo keys.
(setq rcirc-auto-authenticate-flag t)

;; Enable logging support by default.
(setq rcirc-log-flag      t
      rcirc-log-directory (expand-file-name "irclogs" (getenv "HOME")))

;; Passwords for auto-identifying to nickserv and bitlbee.
(setq rcirc-authinfo '(("freenode"  nickserv "keramida"   "********")
                       ("grnet"     nickserv "keramida"   "********")))

;; Some UI options which I like better than the defaults.
(rcirc-track-minor-mode 1)
(setq rcirc-prompt      "»» "
      rcirc-time-format "%H:%M "
      rcirc-fill-flag   nil)

The next section of my rcirc setup is a small hook function which tweaks rcirc settings separately for each buffer (both channel buffers and private-message buffers):

(defun keramida/rcirc-mode-setup ()
  "Sets things up for channel and query buffers spawned by rcirc."
  ;; rcirc-omit-mode always *toggles*, so we first 'disable' it
  ;; and then let the function toggle it *and* set things up.
  (setq rcirc-omit-mode nil)
  (rcirc-omit-mode)
  (set (make-local-variable 'scroll-conservatively) 8192))

(add-hook 'rcirc-mode-hook 'keramida/rcirc-mode-setup)

Finally, the largest section of them all contains definitions for some custom commands and short-hand aliases for stuff I use all the time. First come a few handy aliases for talking to ChanServ, NickServ and MemoServ. Instead of typing /quote nickserv help foo, it’s nice to be able to just type /ns help foo. This is exactly what the following three tiny forms enable, by letting rcirc know that “/cs”, “/ms” and “/ns” are valid commands and passing along any arguments to the appropriate IRC command:

;;
;; Handy aliases for talking to ChanServ, MemoServ and NickServ.
;;

(defun-rcirc-command cs (arg)
  "Send a private message to the ChanServ service."
  (rcirc-send-string process (concat "CHANSERV " arg)))

(defun-rcirc-command ms (arg)
  "Send a private message to the MemoServ service."
  (rcirc-send-string process (concat "MEMOSERV " arg)))

(defun-rcirc-command ns (arg)
  "Send a private message to the NickServ service."
  (rcirc-send-string process (concat "NICKSERV " arg)))

Next comes a nifty little /join replacement which can join multiple channels at once, as long as their names are separated by spaces, commas or semicolons. To make its code more readable, it’s split into 3 little functions: rcirc-trim-string removes leading and trailing whitespace from a string, rcirc-normalize-channel-name prepends “#” to a string if it doesn’t have one already, and finally rcirc-cmd-j uses the first two functions to do the interesting bits:

(defun rcirc-trim-string (string)
  "Trim leading and trailing whitespace from a string."
  (replace-regexp-in-string "^[[:space:]]*\\|[[:space:]]*$" "" string))

(defun rcirc-normalize-channel-name (name)
  "Normalize an IRC channel name. Trim surrounding
whitespace, and if it doesn't start with a ?# character, prepend
one ourselves."
  (let ((trimmed (rcirc-trim-string name)))
    (if (= ?# (aref trimmed 0))
        trimmed
      (concat "#" trimmed))))

;; /j CHANNEL[{ ,;}CHANNEL{ ,;}CHANNEL] - join multiple channels at once
(defun-rcirc-command j (arg)
  "Short-hand for joining a channel by typing /J channel,channel2,channel,...

Spaces, commas and semicolons are treated as channel name
separators, so that all the following are equivalent commands at
the rcirc prompt:

    /j demo;foo;test
    /j demo,foo,test
    /j demo foo test"
  (let* ((channels (mapcar 'rcirc-normalize-channel-name
                           (split-string (rcirc-trim-string arg) " ,;"))))
    (rcirc-join-channels process channels)))

The last short-hand command lets me type /wii NICK to get “extended” whois information for a nickname, which usually includes idle times too:

;; /WII nickname -> /WHOIS nickname nickname
(defun-rcirc-command wii (arg)
  "Show extended WHOIS information for one or more nicknames."
  (dolist (nickname (split-string arg " ,"))
    (rcirc-send-string process (concat "WHOIS " nickname " " nickname))))

With that, my rcirc setup is complete (at least in the sense that I can use it to chat with my IRC friends). There are no bells and whistles like DCC file transfers or color parsing, but I don’t need all that. I just need a simple, fast, pretty IRC client, and that’s exactly what I have now.

by keramida at January 25, 2015 08:41 AM

January 07, 2015

Murray Stokely

AsiaBSDCon 2014 Videos Posted (6 years of BSDConferences on YouTube)

Sato-san has once again created a playlist of videos from AsiaBSDCon. There were 20 videos from the conference held March 15-16, 2014, and the papers can be found here. Congrats to the organizers for running another successful conference in Tokyo. A full list of videos is included below. Six years ago, when I first created this channel, videos longer than 10 minutes couldn't normally be uploaded to YouTube and we had to create a special partner channel for the content. It is great to see how the availability of technical video content about FreeBSD has grown in the last six years.

by Murray (noreply@blogger.com) at January 07, 2015 11:22 PM

December 26, 2013

Giorgios Keramidas

Profiling is Our Friend

I recently wrote a tiny demo program to demonstrate to co-workers how one can build a cache with age-based expiration of its entries, using purely immutable Scala collections. The core of the cache was something like 25-30 lines of Scala code:

class FooCache(maxAgeMillis: Long) {
  def now: Long = System.currentTimeMillis

  case class CacheEntry(number: Long, value: Long,
                        birthTime: Long) {
    def age: Long = now - birthTime
  }

  lazy val cache: AtomicReference[HashMap[Long, CacheEntry]] =
    new AtomicReference(HashMap[Long, CacheEntry]())

  def values: HashMap[Long, CacheEntry] =
    cache.get.filter { case (key, entry) =>
      entry.age <= maxAgeMillis }

  def get(number: Long): Long = {
    values.find{ case (key, entry) =>
      key == number && entry.age <= maxAgeMillis
    } match {
      case Some((key, entry)) =>
        entry.value                // cache hit
      case _ =>
        val entry = CacheEntry(number, compute(number), now)
        cache.set(values + (number -> entry))
        entry.value
    }
  }

  def compute(number: Long): Long =
    { /* Some long-running computation based on 'number' */ }
}

The main idea here is that we keep an atomically updated reference to an immutable HashMap. Every time we look for entries in the HashMap, we check that (entry.age <= maxAgeMillis), to skip over entries which are already too old to be of any use. Then, at cache-insertion time, we go through the ‘values’ function, which excludes all cache entries that have already expired.

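To make the usage pattern concrete, here is a small, hypothetical driver that exercises the cache from several threads at once. It is not part of the original demo program; it assumes FooCache compiles as shown (with java.util.concurrent.atomic.AtomicReference and scala.collection.immutable.HashMap imported) and that compute() has been given a real body.

// Hypothetical usage sketch, not from the original demo program.
object FooCacheDemo {
  def main(args: Array[String]): Unit = {
    // Entries older than five seconds are considered expired.
    val cache = new FooCache(maxAgeMillis = 5000L)

    // Four threads hammering a small key space, so most lookups
    // after warm-up should be cache hits.
    val threads = (1 to 4).map { _ =>
      new Thread(new Runnable {
        def run(): Unit =
          for (i <- 1L to 100000L) cache.get(i % 1000L)
      })
    }
    threads.foreach(_.start())
    threads.foreach(_.join())
  }
}
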
Note how the cache itself is not ‘immutable’. We are just using an immutable HashMap collection to store it. This means that multiple threads can safely iterate through all the entries of the cache at the same time, looking for something they want, without any locking. But there’s an interesting performance bug in this code too…

It’s relatively easy to spot once you know what you are looking for, but did you already catch it? I didn’t. At least not the first time I wrote this code. But I did notice something was ‘odd’ when I started doing lookups from multiple threads and looked at the performance stats of the program in a profiler. YourKit showed the following for this version of the caching code:

JVM Profile #1

See how CPU usage hovers around 60% and we are doing a hefty bunch of garbage collections every second? The profiler quickly led me to line 17 of the code pasted above, where I am going through ‘values’ when looking up cache entries.

Almost 94% of the CPU time of the program was spent inside the .values() function. The profiling report included this part:

+-----------------------------------------------------------+--------+------+
|                           Name                            | Time   | Time |
|                                                           | (ms)   | (%)  |
+-----------------------------------------------------------+--------+------+
| demo.caching                                              | 62.084 | 99 % |
| +-- d.caching.Numbers.check(long)                         | 62.084 | 99 % |
|   +-- d.caching.FooCacheModule$FooCache.check(long)       | 62.084 | 99 % |
|     +---d.caching.FooCacheModule$FooCache.values()        | 58.740 | 94 % |
|     +---scala.collection.AbstractIterable.find(Function1) |  3.215 |  5 % |
+-----------------------------------------------------------+--------+------+

We are spending far too much time expiring cache entries. It is easy to understand why with a second look at the code of the get() function: every cache lookup first expires old entries and then searches for a matching cache entry.

The way cache-entry expiration works with an immutable HashMap as the underlying cache-entry store is that values() iterates over the entire cache HashMap and builds a new HashMap containing only the cache entries which have not expired. This is bound to take a lot of processing power, and it’s also what’s causing the creation of all those ‘new’ objects we are garbage collecting every second!

Do we really need to construct a new cache HashMap every time we do a cache lookup? Of course not… We can just filter the entries while we are traversing the cache.

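Purely as an illustration, here is a minimal sketch of what the class might look like after that change. The name FooCacheFixed, the imports, and the trivial compute() body are my own additions rather than code from the original post; the only functional difference is that get() now scans cache.get directly, and values() runs only on the insertion path.

import java.util.concurrent.atomic.AtomicReference
import scala.collection.immutable.HashMap

// Hypothetical reconstruction: lookups scan the raw map, and
// expiration happens only when a new entry is inserted.
class FooCacheFixed(maxAgeMillis: Long) {
  def now: Long = System.currentTimeMillis

  case class CacheEntry(number: Long, value: Long, birthTime: Long) {
    def age: Long = now - birthTime
  }

  lazy val cache: AtomicReference[HashMap[Long, CacheEntry]] =
    new AtomicReference(HashMap[Long, CacheEntry]())

  // Still filters out expired entries, but is now called only on
  // the insertion path, not on every lookup.
  def values: HashMap[Long, CacheEntry] =
    cache.get.filter { case (_, entry) => entry.age <= maxAgeMillis }

  def get(number: Long): Long =
    cache.get.find { case (key, entry) =>        // was: values.find { ... }
      key == number && entry.age <= maxAgeMillis
    } match {
      case Some((_, entry)) =>
        entry.value                              // cache hit
      case _ =>
        val entry = CacheEntry(number, compute(number), now)
        cache.set(values + (number -> entry))    // expiration happens here
        entry.value
    }

  // Placeholder; the original post left this as a long-running computation.
  def compute(number: Long): Long = number * number
}
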
Changing line 17 from values.find{} to cache.get.find{} means cache-entry expiration no longer happens on every single lookup, so cache lookup speed is no longer limited by how fast we can construct new CacheEntry objects, link them into a HashMap, and garbage-collect the old ones. Running the new code through YourKit once more showed an immensely better utilization profile for the 8 cores of my laptop’s CPU:

JVM Profile #2

Now we are not spending a bunch of time constructing throw-away objects, and garbage collector activity has dropped by a huge fraction. We can also make much more effective use of the available CPU cores for doing actual cache lookups, instead of busy work!

This was instantly reflected in the metrics I was collecting for the actual demo code. Before the change, the code was doing almost 6000 cache lookups per second:

-- Timers -------------------------------
caching.primes.hashmap.cache-check
             count = 4528121
         mean rate = 5872.91 calls/second
     1-minute rate = 5839.87 calls/second
     5-minute rate = 6053.27 calls/second
    15-minute rate = 6648.47 calls/second
               min = 0.29 milliseconds
               max = 10.25 milliseconds
              mean = 1.34 milliseconds
            stddev = 1.45 milliseconds
            median = 0.62 milliseconds
              75% <= 0.99 milliseconds
              95% <= 4.00 milliseconds
              98% <= 4.59 milliseconds
              99% <= 6.02 milliseconds
            99.9% <= 10.25 milliseconds

After the change to skip cache expiration at lookup time and expire cache entries only when inserting new ones, the same timer reported a hugely improved speed for cache lookups:

-- Timers -------------------------------
caching.primes.hashmap.cache-check
             count = 27500000
         mean rate = 261865.50 calls/second
     1-minute rate = 237073.52 calls/second
     5-minute rate = 186223.68 calls/second
    15-minute rate = 166706.39 calls/second
               min = 0.00 milliseconds
               max = 0.32 milliseconds
              mean = 0.02 milliseconds
            stddev = 0.02 milliseconds
            median = 0.02 milliseconds
              75% <= 0.03 milliseconds
              95% <= 0.05 milliseconds
              98% <= 0.05 milliseconds
              99% <= 0.05 milliseconds
            99.9% <= 0.32 milliseconds

That’s more like it. A cache lookup which completes within 0.32 milliseconds even at the 99.9th percentile of all cache lookups is something I definitely prefer working with. The insight from profiling tools like YourKit was instrumental both in understanding what the actual problem was, and in verifying that the solution actually had the effect I expected it to have.

That’s why profiling is our friend!

by keramida at December 26, 2013 04:38 AM

September 25, 2012

Joseph Koshy

New release: ELF Toolchain v0.6.1

I am pleased to announce the availability of version 0.6.1 of the software being developed by the ElfToolChain project.

This new release adds support for additional operating systems (DragonFly BSD, Minix, and OpenBSD), along with many bug fixes and documentation improvements.

This release also marks the start of a new "stable" branch, for the convenience of downstream projects interested in using our code.

Comments welcome.

by Joseph Koshy (noreply@blogger.com) at September 25, 2012 02:25 PM