Planet FreeBSD


April 28, 2019


CFT FreeBSD pkg base now available

FreeBSD is testing a new approach to pkgbase. See the CFT FreeBSD pkg base message for additional details.

April 28, 2019 08:00 AM

February 16, 2019

Colin Percival

FreeBSD ZFS AMIs Now Available

Earlier today I sent an email to the freebsd-cloud mailing list:
Hi EC2 users,

FreeBSD 12.0-RELEASE is now available as AMIs with ZFS root disks in all 16
publicly available EC2 regions:

[List elided; see the GPG-signed email for AMI IDs]

The ZFS configuration (zpool named "zroot", mount point on /, /tmp,
/usr/{home,ports,src}, /var/{audit,crash,log,mail,tmp}) should match what
you get by installing FreeBSD 12.0-RELEASE from the published install media
and selecting the defaults for a ZFS install.  Other system configuration
matches the FreeBSD 12.0-RELEASE AMIs published by the release engineering team.

I had to make one substantive change to 12.0-RELEASE, namely merging r343918
(which teaches /etc/rc.d/growfs how to grow ZFS disks, matching the behaviour
of the UFS AMIs in supporting larger-than-default root disks); I've MFCed
this to stable/12 so it will be present in 12.1 and later releases.

If you find these AMIs useful, please let me know, and consider donating to
support my work on FreeBSD/EC2. If
there's enough interest I'll work with the release engineering team to add
ZFS AMIs to what they publish.

February 16, 2019 09:50 PM

February 11, 2019

Warner Losh

Strange Code

Now That's Weird

I was trying to compile some ancient code I pulled off the net. It is related to the Venix stuff I've been doing on and off of late.
put = bp->b_nleft;
if (put > cnt)
    put = cnt;
bp->b_nleft -= put;
to = bp->b_ptr;
asm("movc3 r8,(r11),(r7)");
bp->b_ptr += put;
p += put;
cnt -= put;
goto top;
So that's weird, right? What the heck is that movc3 doing in the middle of that code?

This code originally ran on BSD 4.1. The only system that version of Unix ran on was a VAX (later versions were more widely ported, but 4.1 was more of a limited-distribution version). OK, looking up the movc3 instruction in VAX references online, we see it is the "Move Character" instruction: r8 is the length, srcaddr is (r11) and dstaddr is (r7). So in effect, someone has done an inline of bcopy() here. Now, that's half of the problem. The other half is puzzling out what is in r7, r8 and r11 at the time of this call. In a perfect world, I'd just crank up the compiler and have it tell me. We live in an imperfect world where spinning up a 4.1 BSD system takes a substantial amount of time.

Fortunately, we can guess. cnt -= put gives us our first clue: we're decrementing by how much we copied, it seems. So r8 (the length) is put. OK. Next, we have this nice variable named 'to' that was most likely in the dstaddr (so r7), and we update it afterwards ('to' appears to be here only for that side effect, which is nice). But what's the source? The only thing it could logically be is 'p', since we += it by put as well.

So my best guess is that it can be replaced by memcpy(to, p, put); and life will be good. My spidey sense also tells me that we don't need memmove here, because they aren't overlapping ranges.

by Warner Losh ( at February 11, 2019 09:28 PM

January 27, 2019

Alexander Leidinger

Strategic thinking, or what I think we need to do to keep FreeBSD relevant

Ever since I started participating in the FreeBSD project, there have been voices from time to time saying FreeBSD is dead and Linux is the way to go. Most of the time those voices are trolls, or people who do not really know what FreeBSD has to offer. Sometimes those voices wear blinders: they only see their own little world (where Linux just works fine) and do not see the big picture (like e.g. competition stimulates business, …) or even dare to look at what FreeBSD has to offer.

Sometimes those voices raise a valid concern, and it is up to the FreeBSD project to filter out what would be beneficial. Recently there were some mails on the FreeBSD lists along the lines of “What about going into direction X?”. Some people just had the opinion that we should stay where we are. In my opinion this is similarly bad to blindly saying FreeBSD is dead and following the masses. It would mean stagnation. We should not hold people back from exploring new / different directions. Someone wants to write a kernel module in (a subset of) C++ or in Rust… well, go ahead, give it a try, we can put it into the Ports Collection and let people get experience with it.

This discussion on the mailing lists also triggered some kind of “where do we see ourselves in the coming years” / strategic-thinking reflection. What I present here is my very own opinion about things we in the FreeBSD project should look at in order to stay relevant in the long term. To be able to put that into scope, I need to clarify what “relevant” means in this case.

FreeBSD is currently used by companies like Netflix, NetApp, Cisco, Juniper, and many others as a base for products or services. It is also used by end-users as a work-horse (e.g. mailservers, webservers, …). Staying relevant means, in this context, providing something the user base is interested in using, something which makes it easier / faster for them to deliver whatever they want or need to deliver than another kind of system would. And this both in terms of time to market of a solution (time to deliver a service like a web-/mail-/whatever-server or product), and in terms of performance (which means not only speed, but also security and reliability and …) of the solution.

I have categorized the list of items I think are important into (new) code/features, docs, polishing and project infrastructure. Links in the following usually point to documentation/HOWTOs/experiences for/with FreeBSD, and not to the canonical entry points of the projects or technologies. In a few cases the links point to an explanation on Wikipedia or to the website of the topic in question.


(New) code/features

The virtualization train (OpenStack, OpenNebula, oVirt, CloudStack, Kubernetes, Docker, Podman, …) is running at full speed. The market is so big / important that solution providers even do joint ventures that cross the borders between each other, e.g. VMware is opening up to integrate their solution with solutions from Amazon/Azure/Google. The underlying infrastructure is getting more and more unimportant, as long as the services which shall run on it perform as needed. Ease of use and time to market are the key drivers (the last little piece of performance is mostly important for companies which go to the “edge” (both meanings intended, in a non-exclusive-or way) like Netflix for their FreeBSD based CDN). FreeBSD is not really participating in this world. Yes, we had jails way before anyone else out there had something similar, and some still do not have anything like it today. But if you are realistic, FreeBSD does not play a major role here. You can do nice things with jails and bhyve, but you have to do it “by hand” (ezjail, iocage and such are improvements on the ease-of-use side, but that is not enough, as they are still limited to a host-centric view). The world has moved on to administering a datacenter (to avoid the buzzwords “cloud” or “private-cloud”) with a single click. In my opinion we would need to port several of the initially mentioned cloud/container management solutions to FreeBSD and have them able to handle their work via jails and/or bhyve. If FreeBSD is not able to serve as a building block in this big picture, we will fall off the edge in this particular IT area in the long run.

With all the ready-made containers available on the internet, we should improve our linuxulator. Our kernel support for this is limited to a 2.6.32-ish ABI version (it is actually less than 2.6.32, more like 2.6.16; we are missing epoll and inotify support, among others, but 2.6.32 is the lowest version the glibc in the CentOS 7 based linux_base port is able to run on… and glibc checks the version number). We need to catch up to a more recent version if we want to be able to run those ready-made linux containers without issue (we can already put a linux system into a jail and start that). If someone would like to work on that, a good start would be to run the Linux Test Project tests via the linuxulator and start fixing bugs. The last time I did that was in 2007, and about 16% of the test cases failed back then. It would also be quite nice if we could integrate those linuxulator tests into the FreeBSD CI. With improvements in the linuxulator and the above-mentioned virtualization support, we should be able to run more of those linux images… ehrm, sorry, docker/kubernetes/…-containers within the linuxulator.
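
Checking what our kernel currently advertises is a one-liner; a quick sketch (the linux64 module name assumes amd64):

# load the Linux ABI and show the kernel version reported to Linux binaries
kldload linux64
sysctl compat.linux.osrelease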

Finish the work regarding Kerberos in base. Cy Schubert is/was working on this. I do not know the current status, but the “fixing 800 ports” part in his mail from May 2018 looks like some more helping hands would be beneficial. This would bring us to a better starting point for a more seamless integration (some ports need “the other” Kerberos).

We have one port (as far as I was able to determine… the exact number does not really matter) in terms of SDN – net/openvswitch – but the documentation for this … leaves room for improvement (kernel support / netmap support and functionality / a FreeBSD-specific HOWTO). As part of the virtualisation of everything, we (yes: we – as part of the FreeBSD handbook, see the docs category below for more on this) need to provide this info so that FreeBSD is able to participate in this area. We should also have a look at porting some more SDN software, e.g. OpenContrail (now Tungsten Fabric; there is an old contrail porting project), OpenStack Neutron, OpenDaylight, … so that users have a choice, and so that FreeBSD can be integrated into existing heterogeneous environments.

Sensors (temperature, voltage, fans, …), a topic with history. In short: in the Google Summer of Code 2007 a sensors framework was produced and committed, and then removed again due to a dispute. My personal understanding (very simplified) is “remove everything because some of the data handled by this framework shall not be handled by this framework” (instead of e.g. “remove sensor X, this data shall not be handled in this way”), and “remove everything as this does not handle sensors of the type which is not used in servers but in enterprise-class, >99% non-IT-related installations”. Nothing better has shown up since then. If I look at Windows, VMware, Solaris and Linux, I can query sensors on my mainboard/chassis/disks/whatever (yes, I am mixing some apples with oranges here), plot them in monitoring systems, and get alarms. On FreeBSD we fail at this topic (actually multiple topics), which I consider to be something basic and mandatory. I do not suggest that we commit the Google Summer of Code 2007 code as it is. I suggest having a look at what makes sense to do here: take the existing code and commit it, improve on this code outside the tree and then commit it, or write something new. In the end it does not matter (for a user) which way it is handled, as long as we have something which users can use in the end. It surely makes sense to have an OS-provided framework for registering sensors in a central place (it would surely be nice if you could get the temp/fan values of your graphics card… ooops… sorry… AI/HPC accelerator together with other similar data from the hardware in your system).
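
What exists today is piecemeal: per-driver sysctls rather than a framework. CPU temperature via coretemp(4), for example (assuming an Intel CPU):

# per-driver sysctls are the closest thing we have to a sensor framework
kldload coretemp
sysctl dev.cpu.0.temperature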

To continue playing well (not only) in the high-availability area, we should also have a look at getting an implementation of MPTCP (Multipath TCP) into the tree. Apple (and others) has already been using it since 2013 with good benefits (most probably not only for Siri users). There exists some code for FreeBSD, but it is far from usable, and it does not look like there has been progress since 2016. We say we have the power to serve, but with the cloudification of recent years, users expect that everything is always-on and never fails, and being able to provide the server side of this client-server technology for those people who have such high demands is necessary to not fall behind (let us not rest on our laurels).

SecureBoot also needs some helping hands. At some point, operating systems which do not support it will no longer be considered by companies.

Another item we should have a look at is providing the means to write kernel code in different languages: not in the base system, but at least in ports. If someone wants to write a kernel module in C++ or Rust, why not? It offers possibilities to explore new areas. There are even reports of experiences with different languages. It does not fit your needs? Well, ignore it and continue writing kernel code in C, but let other people who want to use a screwdriver instead of a hammer do what they want; they will either learn that they should have used a hammer, or can report on the benefits of the screwdriver.


Docs

I think we can take our end-user docs to the next level. The base system is already well covered (we can surely find some features which we could document), but a user does not use FreeBSD to use FreeBSD. A user surely has a goal in mind which requires setting up some kind of service (mail server, web server, display server (desktop system), …). While one could argue that it is the 3rd-party project which needs to document how to run their software on FreeBSD, I think we need to do our share here too. There are a lot of HOWTOs for Linux, and then you have to find some tips and tricks to make something work (better) on FreeBSD. What I have in mind here is that we should document how to make FreeBSD participate in a Windows Active Directory environment, or in an LDAP environment (as a client), improve the Samba part with FreeBSD-specific parts (like how to make Samba use ZFS snapshots for Windows Shadow Copies), configuration management tools and so on. I do not mean providing in-depth docs about the 3rd-party software, but little HOWTOs with FreeBSD-specific parts / tips and tricks, and a reference to the 3rd-party docs. People come to us with real-world needs, and providing them with a head start on the most common items (e.g. also covering nginx or whatever, and not only apache httpd) and then guiding them to further docs will improve the value of our handbook even more for end-users (especially for newcomers, but also for experienced FreeBSD users who all of a sudden need to do something they never did before…).

We should also review our docs. The handbook lists e.g. procmail (just an example…). With procmail not having been maintained for a long time and having known vulnerabilities, we should replace the info there with info about maildrop (or any suitable replacement). Careful review may also find similar items which need some care.

One more item I have in mind in terms of docs for users is the restructuring of some parts. Now that the world thinks more in terms of XaaS (“something as a service”), we should also have a “cloud” section (going beyond what we already have in terms of virtualization) in our handbook. We can put there items like the existing description of virtualisation, but we should also add new items like glusterfs or object storage or the hopefully upcoming possibility of setting up OpenStack/kubernetes/… on FreeBSD. This goes in the same direction as the first docs item: provide more documentation on how to achieve the goals of our users.

In my opinion we are also lacking on the developer-documentation side. Yes, we have man pages which describe the official API (in most cases). Where I see room for improvement is the source code documentation. Something like doxygen (or whatever the tool of the day is; which one does not really matter, any kind of extractable-from-source documentation is better than none) is already used in several places in our source (search for it via: egrep -R '\\(brief|file)' /usr/src/) and we already have some infrastructure to extract and render (HTML / PDF) it. The more accessible / easy it is to start development on FreeBSD, the more attractive it will be (in addition to the existing benefits) for people / companies to dive in. The best examples of documented source code I have found so far are the isci and ocs_fc drivers.


Polishing

Polishing, in a post about staying relevant? Yes! It is the details which matter. If people have 2 options with roughly the same features (nothing missing that you need, same price), which one do they take: the one which has everything consistent and well integrated, or the one with some quirks which they can circumvent with a little bit of work on their side?

We have some nice features, but we are not using them to the extent possible. One of the items which comes to my mind is DTrace. The area which I think needs polishing is to add more probes, and to have some kind of probe-naming convention for common topics, for example an I/O related naming convention (maybe area specific, like storage I/O and network I/O), with all drivers covered to comply. We should also look into making it more accessible by providing easier interfaces (no matter if text based (thanks to Devin Teske for dwatch, more of this magic please…), web based, or whatever) to make it really easy (= start a command or click around and you get the result for a specific set of probes/conditions/…). Some examples are statemaps, flamegraphs and most prominently the Oracle/Sun ZFS Storage Analytics, which give you an idea of what is possible with DTrace and how to make it accessible to people without knowledge of kernel internals and programming.
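
To give an idea of the kind of query such interfaces would wrap, here is a classic io-provider one-liner (a sketch, not dwatch itself):

# count disk I/O initiations per process until interrupted
dtrace -n 'io:::start { @[execname] = count(); }'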

Some polishing in the Ports Collection would be to revisit the default options of ports with options. The target here should be to have consistent default settings (e.g. server software should not depend upon X11 by default (directly or indirectly), and most people should not need to build the port with non-default options). One could argue that this is the responsibility of the maintainer of the port, and to some extent it is, but we do not have guidelines which help here. So a little team of people to review all ports (and modify them if necessary) and to come up with guidelines and examples would be great.
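
The standard ports targets already make such a review scriptable; www/nginx below is just an arbitrary example:

# show the options a port would build with, then reset them to the defaults
make -C /usr/ports/www/nginx showconfig
make -C /usr/ports/www/nginx rmconfig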

Additionally we should come up with meta-ports for specific use cases, e.g. webserver (different flavours… apache/nginx/…) or database (different flavours, with some useful tools like mytop or mysqltuner or similar), and then even reference them in the handbook (this goes along with my suggestion above to document real-world use cases instead of “only” the OS itself).

Recently -current has seen some low-level performance improvements (via ifuncs, …). We should continue this and even extend it to revising default settings / values (non-auto-tuned and even auto-tuned ones). I think it would be beneficial in the long run if we targeted more current hardware (without losing the ability to run on older hardware), and for those values which can not be auto-tuned, provided some way of down-tuning (e.g. in a failsafe-boot setting in the loader, or in documented settings for rc.conf or wherever those defaults can be changed).

Project infrastructure

We have a CI (continuous integration) system, but it is not very prominently placed. Just recently it gained some more attention from the developer side, and we even got the first status report about it (nice! visibility helps make it a part of the community effort). There is a FreeBSD-wiki page about the status and future ideas, but it has not been updated for several months. There is also a page which talks in more detail about using it for performance testing, which is something people have talked about for years but which never became available (and still is not available today).

I think we need to improve here. The goals I think are important are to get various testing, sanitizing and fuzzing technologies integrated into our CI. In the config repository I have not found any integration of e.g. the corresponding clang technologies (fuzzing, ASAN, UBSAN, MSAN (still experimental, so maybe not a target before the other, more mature technologies)) or any other such technology.

We should also make our CI more public/visible (build status linked somewhere on the website, nag people more about issues found by it, have some docs on how to add new tests (maybe from ports)), so that more people can help extend what we test automatically (e.g. how could I integrate the LTP (Linux Test Project) tests to test our linuxulator? This requires the download of a linux dist port, the LTP itself, and then running the tests). There are a lot of nice ideas floating around, but I have the impression we are lacking some helping hands to get various items integrated.


Various items I talked about above are not sexy. These are typically not the things people do just for fun; these are typically items people get paid for. It would be nice if some of the companies which benefit from FreeBSD would lend a helping hand for one or another item. Maybe the FreeBSD Foundation has some contacts they could ask about this?

It could also be that for some of the items I mentioned here there is more going on than I know of. In that case, the corresponding work could be made more known on the mailing lists. When it is more known, maybe someone will want to lend a helping hand.


by netchild at January 27, 2019 09:18 PM

December 26, 2018

Colin Percival

The many ways to launch FreeBSD in EC2

Talking to FreeBSD users recently, I became aware that while I've created a lot of tools, I haven't done a very good job of explaining how, and more importantly when, to use them. So for all of the EC2-curious FreeBSD users out there: Here are the many ways to launch and configure FreeBSD in EC2 — ranging from the simplest to the most complicated (but most powerful):

Launch FreeBSD and SSH in

This is the most straightforward way to get started with FreeBSD: Using either the EC2 API (most easily accessed via the awscli package) or the AWS Console, spin up a "stock FreeBSD" AMI. You have a few choices to make here:
  1. Whether to use the Community or Marketplace AMIs.
  2. Which EC2 Region you want to launch your instance into.
  3. Which of many EC2 instance types you want to use.
  4. How large you want the root disk to be (the root filesystem will be resized automatically if you select a size larger than the default 10 GB) and whether to attach additional disks.
For a "hobbyist" the easiest answers will probably be to use the Marketplace AMIs, in the EC2 region which is geographically closest to you, with a t2.micro and a single 10 GB disk (the default).
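
With the awscli package installed and credentials configured, such a launch is a single command; the AMI ID below is a placeholder for whichever FreeBSD AMI you picked for your region:

# launch a single t2.micro instance from a chosen FreeBSD AMI
aws ec2 run-instances \
    --region us-east-1 \
    --image-id ami-XXXXXXXXXXXXXXXXX \
    --instance-type t2.micro \
    --key-name my-keypair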

December 26, 2018 07:05 AM

December 17, 2018

Warner Losh

Adding additional revisions

Adding other directories

Sometimes you need to add commits from other places / directories to a repo you've slimmed down. This post uses the timed example to offer some advice.


Fortunately for us, the SMM.doc directory was moved late in the game. As such, it was easy to edit the commit stream to remove that commit, and then replay all the commits that came after it. Fortunately, there was only one such commit: the one removing the 3rd (advertising) clause. That was done by hand, committed, and the original commit message pasted into the log. I then used git rebase to order this commit in the right place temporally.


For this directory, I followed a different path. After looking at this file (or should I say at what it is currently called: libexec/rc.d/timed), I determined there were only a few real commits. Since there were only 10 commits, I just created a dumb script to run in the FreeBSD root of a github mirror repo:

d=/tmp/timed-junk; mkdir -p $d    # one patch file per commit lands here
j=0
for i in $(grep ^commit /tmp/3 | awk '{print $2;}' | tail -r); do
        git show $i etc/rc.d/timed | sed -e s=/etc/rc.d=/rc.d=g > $d/$(printf %04d $j)
        j=$(($j + 1))
done
Where /tmp/3 had the output of 'git log etc/rc.d/timed', filtered to remove all the bogus commits (e.g. the merge ones).

Once I had these in place, I was able to then import them into my repo by cd'ing to the root and running
git am --patch-format=stgit /tmp/timed-junk/*
I oopsed and let a merge commit sneak through; if you do that too, you can just delete the offending file in /tmp/timed-junk. Also, I don't know why git am didn't autodetect the format, but with an explicit format it just worked.

This produced 9 commits that resulted in the same timed file as was in svn. I cheated a little and omitted the movement commits, and since this is in git, $FreeBSD$ isn't expanded. This time, I didn't bother to sort them into the stream chronologically, since I have no automation to do that and sorting 9 commits by hand was more than I had time for.

Push the result

Since I rebased, I had to do a forced push. Should someone come along and want to make this a port, I'll sort the commits then, do another forced push, and then publish the final results under FreeBSD's github account rather than my own personal one.

by Warner Losh ( at December 17, 2018 11:49 PM

November 14, 2018


Inky pHat on FreeBSD/Pi

About a month ago I purchased an Inky pHat from Pimoroni, a Pi hat with a 220×104 red-and-black eInk screen. The device has an SPI interface with three additional GPIO signals: a reset pin, a command/data pin, and a busy pin. Reset and busy are self-explanatory: the former resets the device's MCU, the latter signals to the Pi whether the MCU is still busy handling the previous command/data. Command/data signals the type of SPI transaction that is about to be sent to Inky: low means command, high means data. It more or less matches the interface of the SSD1306 OLED display I played with before.

There is no datasheet or protocol description, so I used Pimoroni’s python library as a reference.

To communicate with the device over SPI, you need to apply the spigen device-tree overlay and load the spigen driver. For Raspberry Pi 3 you probably need the patch from this review applied. To load the overlay, add the respective dtbo name to the fdt_overlays variable in /boot/loader.conf, e.g. (the overlay name below is an assumption on my part; check /boot/dtb/overlays for the exact file name):
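
fdt_overlays="spigen-rpi3"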


I didn’t have any practical purpose for the device in mind, so after several failed attempts to output RGB images in two colors I ended up writing a random ornament generator:



by gonzo at November 14, 2018 06:21 PM

October 22, 2018

Dag-Erling Smørgrav

DNS over TLS in FreeBSD 12

With the arrival of OpenSSL 1.1.1, an upgraded Unbound, and some changes to the setup and init scripts, FreeBSD 12.0, currently in beta, now supports DNS over TLS out of the box.

DNS over TLS is just what it sounds like: DNS over TCP, but wrapped in a TLS session. It encrypts your requests and the server’s replies, and optionally allows you to verify the identity of the server. The advantages are protection against eavesdropping and manipulation of your DNS traffic; the drawbacks are a slight performance degradation and potential firewall traversal issues, as it runs over a non-standard port (TCP port 853) which may be blocked on some networks. Let’s take a look at how to set it up.

Basic setup

As a simple test case, let’s set up our 12.0-ALPHA10 VM to use Cloudflare’s DNS service:

# uname -r
12.0-ALPHA10
# cat >/etc/rc.conf.d/local_unbound <<EOF
# service local_unbound start
Performing initial setup.
/var/unbound/forward.conf created
/var/unbound/lan-zones.conf created
/var/unbound/control.conf created
/var/unbound/unbound.conf created
/etc/resolvconf.conf not modified
Original /etc/resolv.conf saved as /var/backups/resolv.conf.20181021.192629
Starting local_unbound.
Waiting for nameserver to start... good
# host
is an alias for
has address
has IPv6 address 2610:1c1:1:606c::50:15
mail is handled by 0 .
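
In Unbound terms, what this sets up is a forward zone over TLS: the generated /var/unbound/forward.conf (listed as created above) ends up looking roughly like this (the forwarder address below is a placeholder):

forward-zone:
        name: .
        forward-addr: 198.51.100.1@853
        forward-ssl-upstream: yes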

Note that this is not a configuration you want to run in production—we will come back to this later.


The downside of DNS over TLS is the performance hit of the TCP and TLS session setup and teardown. We demonstrate this by flushing our cache and (rather crudely) measuring a cache miss and a cache hit:

# local-unbound-control reload
# time host >x
host > x 0.00s user 0.00s system 0% cpu 0.553 total
# time host >x
host > x 0.00s user 0.00s system 0% cpu 0.005 total

Compare this to querying our router, a puny Soekris net5501 running Unbound 1.8.1 on FreeBSD 11.1-RELEASE:

# time host gw >x
host gw > x 0.00s user 0.00s system 0% cpu 0.232 total
# time host gw >x
host gw > x 0.00s user 0.00s system 0% cpu 0.008 total

or to querying Cloudflare directly over UDP:

# time host >x      
host > x 0.00s user 0.00s system 0% cpu 0.272 total
# time host >x
host > x 0.00s user 0.00s system 0% cpu 0.013 total

(Cloudflare uses anycast routing, so it is not so unreasonable to see a cache miss during off-peak hours.)

This clearly shows the advantage of running a local caching resolver—it absorbs the cost of DNSSEC and TLS. And speaking of DNSSEC, we can separate that cost from that of TLS by reconfiguring our server without the latter:

# cat >/etc/rc.conf.d/local_unbound <<EOF
# service local_unbound setup
Performing initial setup.
Original /var/unbound/forward.conf saved as /var/backups/forward.conf.20181021.205328
/var/unbound/lan-zones.conf not modified
/var/unbound/control.conf not modified
Original /var/unbound/unbound.conf saved as /var/backups/unbound.conf.20181021.205328
/etc/resolvconf.conf not modified
/etc/resolv.conf not modified
# service local_unbound start
Starting local_unbound.
Waiting for nameserver to start... good
# time host >x
host > x 0.00s user 0.00s system 0% cpu 0.080 total
# time host >x
host > x 0.00s user 0.00s system 0% cpu 0.004 total

So does TLS add nearly half a second to every cache miss? Not quite, fortunately—in our previous tests, our first query was not only a cache miss but also the first query after a restart or a cache flush, resulting in a complete load and validation of the entire path from the name we queried to the root. The difference between a first and second cache miss is quite noticeable:

# time host >x 
host > x 0.00s user 0.00s system 0% cpu 0.546 total
# time host >x
host > x 0.00s user 0.00s system 0% cpu 0.004 total
# time host >x
host > x 0.00s user 0.00s system 0% cpu 0.168 total
# time host >x
host > x 0.00s user 0.00s system 0% cpu 0.004 total

Revisiting our configuration

Remember when I said that you shouldn’t run the sample configuration in production, and that I’d get back to it later? This is later.

The problem with our first configuration is that while it encrypts our DNS traffic, it does not verify the identity of the server. Our ISP could be routing all traffic to to its own servers, logging it, and selling the information to the highest bidder. We need to tell Unbound to validate the server certificate, but there’s a catch: Unbound only knows the IP addresses of its forwarders, not their names. We have to provide it with names that will match the x509 certificates used by the servers we want to use. Let’s double-check the certificate:

# :| openssl s_client -connect |& openssl x509 -noout -text |& grep DNS
DNS:*, IP Address:, IP Address:,, IP Address:2606:4700:4700:0:0:0:0:1111, IP Address:2606:4700:4700:0:0:0:0:1001

This matches Cloudflare’s documentation, so let’s update our configuration:

# cat >/etc/rc.conf.d/local_unbound <<EOF
# service local_unbound setup
Performing initial setup.
Original /var/unbound/forward.conf saved as /var/backups/forward.conf.20181021.212519
/var/unbound/lan-zones.conf not modified
/var/unbound/control.conf not modified
/var/unbound/unbound.conf not modified
/etc/resolvconf.conf not modified
/etc/resolv.conf not modified
# service local_unbound restart
Stopping local_unbound.
Starting local_unbound.
Waiting for nameserver to start... good
# host
is an alias for
has address
has IPv6 address 2610:1c1:1:606c::50:15
mail is handled by 0 .
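
The difference from the first attempt is the authentication name appended to each forwarder, using Unbound's addr@port#authname syntax; in forward.conf terms (the address is again a placeholder):

forward-zone:
        name: .
        forward-addr: 198.51.100.1@853#cloudflare-dns.com
        forward-ssl-upstream: yes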

How can we confirm that Unbound actually validates the certificate? Well, we can run Unbound in debug mode (/usr/sbin/unbound -dd -vvv) and read the debugging output… or we can confirm that it fails when given a name that does not match the certificate:

# perl -p -i -e 's/cloudflare/cloudfire/g' /etc/rc.conf.d/local_unbound
# service local_unbound setup
Performing initial setup.
Original /var/unbound/forward.conf saved as /var/backups/forward.conf.20181021.215808
/var/unbound/lan-zones.conf not modified
/var/unbound/control.conf not modified
/var/unbound/unbound.conf not modified
/etc/resolvconf.conf not modified
/etc/resolv.conf not modified
# service local_unbound restart
Stopping local_unbound.
Waiting for PIDS: 33977.
Starting local_unbound.
Waiting for nameserver to start... good
# host
Host not found: 2(SERVFAIL)

But is this really a failure to validate the certificate? Actually, no. When provided with a server name, Unbound will pass it to the server during the TLS handshake, and the server will reject the handshake if that name does not match any of its certificates. To truly verify that Unbound validates the server certificate, we have to confirm that it fails when it cannot do so. For instance, we can remove the root certificate used to sign the DNS server’s certificate from the test system’s trust store. Note that we cannot simply remove the trust store entirely, as Unbound will refuse to start if the trust store is missing or empty.

While we’re talking about trust stores, I should point out that you currently must have ca_root_nss installed for DNS over TLS to work. However, 12.0-RELEASE will ship with a pre-installed copy.


We’ve seen how to set up Unbound—specifically, the local_unbound service in FreeBSD 12.0—to use DNS over TLS instead of plain UDP or TCP, using Cloudflare’s public DNS service as an example. We’ve looked at the performance impact, and at how to ensure (and verify) that Unbound validates the server certificate to prevent man-in-the-middle attacks.

The question that remains is whether it is all worth it. There is undeniably a performance hit, though this may improve with TLS 1.3. More importantly, there are currently very few DNS-over-TLS providers—only one, really, since Quad9 filter their responses—and you have to weigh the advantage of encrypting your DNS traffic against the disadvantage of sending it all to a single organization. I can’t answer that question for you, but I can tell you that the parameters are evolving quickly, and if your answer is negative today, it may not remain so for long. More providers will appear. Performance will improve with TLS 1.3 and QUIC. Within a year or two, running DNS over TLS may very well become the rule rather than the experimental exception.

by Dag-Erling Smørgrav at October 22, 2018 09:36 AM

August 13, 2018

Alexander Leidinger

Essen Hackathon 2018

It is again that time of the year when we have the pleasure of holding the Essen Hackathon, this time in nice weather (sunny, not too hot, no rain). A lot of people were here, about 20. Not only FreeBSD committers showed up, but also contributors (the biggest group was 3 people who work on iocage/libiocage, plus some individuals with interests in various topics like e.g. SCTP / network protocols, and other topics I unfortunately forgot).

The topics of interest this year:

  • workflows / processes
  • Wiki
  • jail- / container management (pkgbase, iocage, docker)
  • ZFS
  • graphics
  • documentation
  • bug squashing
  • CA trust store for the base system

I first worked with Allan on moving forward with a CA trust store for the base system (target: make fetch work out of the box for TLS connections; currently you will get an error that the certificate can not be validated if you do not have the ca_root_nss port (or any other source of trust) installed and a symlink in base to the PEM file). We investigated how base-openssl, ports-openssl and libressl are set up (ports-openssl is the odd one in the list: it looks in LOCALBASE/openssl for its default trust store, while we would have expected it to look in LOCALBASE/etc/ssl). As no ports-based ssl lib looks into /etc/ssl, we were safe to do whatever we want in base without breaking the behavior of ports which depend upon the ports-based ssl libs. With that, the current design is to import a set of CAs into SVN – one cert file per CA – plus a way to update them (for the security officer and for users), a way to blacklist CAs, and having base-system and local CAs merged into the base config. The expectation is that Allan will be able to present at least a prototype at EuroBSDCon.

I also had a look, together with the iocage/libiocage developers, at some issues I have with iocage. The nice thing is that the current version of libiocage already solves the issues I see (I just have to change my processes a little bit). Some more cleanup is needed on their side until they are ready to make a port of libiocage. I am looking forward to it.

Additionally I got some time to look at the list of PRs with patches I wanted to look at. Out of the 17 PRs I took note of, I have closed 4 (one because it was overcome by events). One is in progress (committed to -current, but I want to MFC it). One additional one (from the iocage guys) I forwarded to jamie@ for review. I also noticed that Kristof fixed some bugs too.

On the social side we had discussions during BBQ, pizza/pasta/…, and a restaurant visit. As always, Kristof was telling some funny stories (or at least telling stories in a funny way… 😉). This of course triggered some other funny stories from other people. All in all, my bottom line for this year's Essen Hackathon is (as for the other 2 I visited): fun, sun and progress for FreeBSD.

It seems that by bringing cake every time I went there, I have created a tradition. So everyone should already plan to register for the next one – if nothing bad happens, I will bring cake again.


by netchild at August 13, 2018 07:46 PM

August 04, 2018

Adrian Chadd

Aligning a TS-430S, or "wait, how am I supposed to check FM again?"

I'm fixing another (I know, I know) TS-430S for a friend. Yes, this means I'm returning it to them. After all of the repairs I had to do to get the thing up and going reliably, I did an RX carrier calibration. It was a little bit off - a combination of using WWV at 10MHz and the scope to calibrate CW, USB and SSB.

However, the AM and FM carriers didn't at all meet the expectations of the service manual. Notably, the AM carrier is seemingly the same as the USB carrier on transmit and there isn't one on receive. The FM carrier just didn't appear during receive or transmit. But .. it's transmitting FM.

Now, I need to go get the TS-430Ses I've fixed and compare the carrier behaviour to the other rigs, but .. well, they work on AM/FM receive and transmit. So ok, let's figure it out.

The AM carrier matches the USB carrier. It's weird because the circuit has an AM/FM carrier crystal however.. yeah, AM carrier here is linked to the USB carrier. I need to figure that out. And the FM transmit has no power control - it's 100W carrier only. So the only way to do it without dumping 100W out into the finals whilst adjusting it is to remove the RF drive output on the RF board (which feeds the finals with RF), attach a 50 ohm resistor across it and check the final RF carrier signal on the scope. This worked mostly OK but since there's no ALC feedback, the output is .. very distorted. Now, I don't know if these rigs were supposed to output a clean sine wave at all carrier output settings but .. well, they're very loud signals on lower bands, sometimes more than 8V peak-to-peak, which is almost triple what you need to feed the finals to get 100W out. So I got it in the ballpark - because well, the thing is not outputting a true sine wave here because the carrier output is way too high - and then had to resort to checking using a directional coupler and the scope.

Now, this isn't too bad - I was in the rough right spot for the FM carrier frequency anyway, and I can key down for a few seconds at a time without making things sad. But, this step was delayed until I verified the finals were working and that took a lot of work to get right. It turns out it was on the nose anyway after all of that and FM modulation now works great.

So - if you're aligning a TS-430S, the AM/FM carrier bit in the service manual may not be entirely correct.

by Adrian ( at August 04, 2018 11:41 PM

July 15, 2018

Adrian Chadd

Restoring a TS-430S, or "dry joints and stray RF: a tutorial"

I recently acquired a TS-430S HF transceiver. The seller claimed the FM board and full complement of filters worked, but no display, buttons/LEDs or sound. He said it worked until he sent it in to have filters added. I figured it was going to be something simple. Boy was I both right and wrong.

These rigs have a habit of dry joints everywhere. So, I powered it up to see - yes, no display. Ok - step 1 - check power rails. I discovered there was no 5v line. The IF board has the 7805 regulator, so it is time to check for dry joints.

Oh look! Some very dry joints. I bet these were marginal until the tech installing the filters jostled it about. I fixed these and anything else I could find on the IF board.

I then powered it up. One digit showed up - the optional 10Hz digit - but all the lights and buttons worked.

Now, this rig has a separate PLL board for the main VFO which exports a signal that blanks the VFO output and the display. Amusingly it doesn't blank the final digit though. Ok, so it's likely PLL unlock. The PLL board was getting power, but ... no stable 36MHz base oscillator. That's on the control board. I pulled that out to find more dry joints around that circuit and its connector - so, fixed that.

I fired it up again. The PLL board was still unlocked even though the 36MHz oscillator was now working. I spun the dial and measured the other VFO feeding the PLL board - this is the fine-grain frequency selection that gets mixed in with the PLL board's four VCOs to output the final VFO signal. It was moving OK - so the control board and the other PLLs were OK. Next - check the four VCO selection lines - nothing.

The PLL board has four varicap diode based VCOs and a PLL loop. The control board outputs the band select data to the RF board which decodes it and drives the PLL VCO, the relay based LPFs and the receiver HPFs. There were multiple issues - the control board bandpass lines were wrong and the VCO select lines were wrong.

Next - the RF board. Dry joints everywhere. Here is one of many that linked ground planes together.

And this one was on the VCO output connector.

I removed the TTL IC that did the BCD to output line demuxing because it was dead and fixed the dry joints. But the control board was still outputting the wrong band info. It turns out the IO expander IC that drives those four lines had two dead IO lines. So, that needed replacing too.

At this stage the control board was OK, the band select lines and VCO select lines are OK, but no PLL lock. Time to diagnose the PLL board.

First up - the varicap VCO was working. Wrong frequency but working. The circuit takes the output of that, buffers it though a transistor amplifier, shapes it into a square wave and divides it down via a pair of TTL chips and feeds it into the PLL control IC.

Next - the 5v line on the PLL board was ... suspiciously low. 5v was coming in OK, but something was dragging it down to 3.8v in places. That is too low for TTL. I checked each chip and... the 74S112N flip flop chip was running hot. Ok, so that needed replacing. Note it is S and not 74LS - the PLL loop runs from 45 to 75MHz, so it needs speed. With that chip replaced, the 5v rail was again at 5v. But still no PLL lock.

So I then traced the PLL loop. The VCO was OK. The VCO through the buffer amp wasn't. I pulled out the transistor there and it was open circuit. I didn't have an equivalent, so I found a close-enough one for now and ordered a replacement. But then it was still not working right - the signal level into the TTL NAND chip was super low. I figured either the transistor I replaced it with wasn't biased right, or the TTL chip was pulling its input low. Indeed it was the latter - the input side was shorted to ground. I replaced that chip and the rig sprang to life!

I recalibrated the four VCOs now that I had replaced some parts. It was locking OK on all bands.

But - the receive signal was low. I checked the attenuator switch - no go. I disconnected the attenuator control cable to the RF board - RX sprung to life! A little solder reflow on the switch board and that fixed that.

After that I just did the obligatory filter and finals board check and reflow.

One LPF relay clean procedure and finals alignment later and it's all ready to go. The SWR foldback protection needs fixing and I need a 150 ohm dummy load to do that, so that's my next week project.

As to how those parts all failed, likely at once? My guess is stray RF fried a path somehow. I'm glad this was the extent of the part damage!

by Adrian ( at July 15, 2018 09:38 PM

April 09, 2018

Dag-Erling Smørgrav

Twenty years

Yesterday was the twentieth anniversary of my FreeBSD commit bit, and tomorrow will be the twentieth anniversary of my first commit. I figured I’d split the difference and write a few words about it today.

My level of engagement with the FreeBSD project has varied greatly over the twenty years I’ve been a committer. There have been times when I worked on it full-time, and times when I did not touch it for months. The last few years, health issues and life events have consumed my time and sapped my energy, and my contributions have come in bursts. Commit statistics do not tell the whole story, though: even when not working on FreeBSD directly, I have worked on side projects which, like OpenPAM, may one day find their way into FreeBSD.

My contributions have not been limited to code. I was the project’s first Bugmeister; I’ve served on the Security Team for a long time, and have been both Security Officer and Deputy Security Officer; I managed the last four Core Team elections and am doing so again this year.

In return, the project has taught me much about programming and software engineering. It taught me code hygiene and the importance of clarity over cleverness; it taught me the ins and outs of revision control; it taught me the importance of good documentation, and how to write it; and it taught me good release engineering practices.

Last but not least, it has provided me with the opportunity to work with some of the best people in the field. I have the privilege today to count several of them among my friends.

For better or worse, the FreeBSD project has shaped my career and my life. It set me on the path to information security in general and IAA in particular, and opened many a door for me. I would not be where I am now without it.

I won’t pretend to be able to tell the future. I don’t know how long I will remain active in the FreeBSD project and community. It could be another twenty years; or it could be ten, or five, or less. All I know is that FreeBSD and I still have things to teach each other, and I don’t intend to call it quits any time soon.


by Dag-Erling Smørgrav at April 09, 2018 08:35 PM

February 05, 2018

Remko Lodder

Response zones in BIND (RPZ / blocking unwanted traffic).

A while ago, my dear colleague Mattijs came up with an interesting option in BIND: response policy zones. One can create custom "zones" and enforce a policy on them.

I never worked with it before, so I had no clue at all what to expect from it. Mattijs told me how to configure it (see below for an example) and offered to let me slave his RPZ policy domains.
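
A minimal named.conf sketch of such a setup, with the policy zone slaved like any other zone (the zone name and master address are placeholders, not his actual setup):

// transfer the policy zone like any ordinary slave zone...
zone "rpz.example.net" {
        type slave;
        masters { 192.0.2.1; };
        file "slave/rpz.example.net";
        allow-query { localhost; };
};

// ...and enforce it on answers (this statement goes inside options {})
response-policy {
        zone "rpz.example.net";
};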

All of a sudden I was no longer getting a lot of ADS/SPAM and other things. It was filtered. Wow!

His RPZ zones were custom-made and based on PiHole. But where PiHole adds hosts to the local "hosts" file and points them at (your local machine), which prevents them from ever reaching the actual server, RPZ policies are much stronger and more dynamic.

RPZ policies offer the ability to "redirect" queries. What do I mean by that? Well, you can add an ADVERTISEMENT (AD for short) site / domain to the RPZ policy and return NXDOMAIN: it no longer exists for the end-user. But you can also CNAME it to a domain/host you own, add a webserver to that host, and tell the user querying the page: "The site you are trying to reach has been proactively blocked by the DNS software. This is an automated action and an automated response. If you feel that this is not appropriate, please let us know on <mail link>", or something like that.

Once I noticed that and saw the value, I immediately saw the benefit for companies and most likely schools and home users. Mattijs had a busy time at work and I was recovering from health issues, so I had "plenty" of time to investigate and read up on this. The RPZ zones were not updated often and caused some problems for my ereaders, for example ( was used by them, see another post on this website for my being grumpy about that). And I wanted to learn more about it. So what did I do?

Yes, I wrote my own parser. In perl. I wrote an "rpz-generator" (it's actually called like that). I added the sources Mattijs used and generated my own files. They are rather huge, since I blocked ads, malware, fraud, exploits, windows stuff and various other things (gambling, fakenews, and stuff like that).

I also included some whitelists, because msfctinc was added to the lists and it made my ereaders go berserk, and we play a few games here and there which use some advertisement sites, so we wanted to exempt those as well. It's better to know which ones they are and selectively allow them than to have traffic going to every data collector out there.

This works rather well. I do not get a lot of complaints that things are not working. I do see a lot of queries going to "banned" sites every day, so it is doing something. The most obvious effect is that search results on google are not always clickable: the ones that are [ADV] sites are blocked, because they are advertising google-sponsored sites and those are on the list.. and google-analytics etc. It doesn't cause much harm to our internet surfing or user experience, with the exception of the ADV sites I just mentioned. My wife sometimes wants to click on those because she searches for something that happens to be on that list, but apart from that we are doing just fine.

One thing though: I wrote my setup, and this article, around "NXDOMAIN", which just gives back "site does not exist" messages. I want to make my script smarter by making this selectable, so that some categories are CNAMEd to a filtering domain and webpage, and some are NXDOMAINed. If someone has experience with that, please show me some ideas, what that looks like, and whether your end-users can do something with it or not. I think schools would be happy to present a block page instead of NXDOMAINing some sites 🙂
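
For reference, both behaviours are plain records in the policy zone itself: a CNAME to "." yields NXDOMAIN, while an ordinary CNAME rewrites the answer to a host you control (all names below are placeholders):

; return NXDOMAIN for the domain and everything under it
ads.example.com       IN CNAME .
*.ads.example.com     IN CNAME .

; ...or rewrite it to a block page instead
tracker.example.org   IN CNAME blockpage.mydomain.example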

Acknowledgements: Mattijs for teaching and showing me RPZ; ISC for putting RPZ in named and for having such excellent documentation on RPZ; the perl developers for having such a great tool around; and the various sites I use to get the blocklists from. Thank you all!

If you want to know more about the tool, please contact me and we can share whatever information is available 🙂

by Remko Lodder at February 05, 2018 11:09 PM

November 25, 2017

Erwin Lansing

120. Red ale

5,5kg Maris Otter
500g Crystal 60L
450g Munich I
100g Chocolate malt

Mash for 75 minutes at 65°C

30g Cascade @ 60 min.
30g Centennial @ 60 min.
30g Cascade @ 10 min.
30g Centennial @ 10 min.

Bottled on January 7, 2018 with 150g table sugar

White Labs WLP001 California ale yeast
OG: 1.052
FG: 1.006
ABV: 6,0%

The post 120. Red ale appeared first on Droso.

by erwin at November 25, 2017 05:34 PM

September 10, 2017

Erwin Lansing

119. Dunkel Weizen

2,5kg Munich II
2,5kg Dark wheat malt
0,2kg Special B
0,10kg Carafa II

Mash for 60 min at 65°C

30g Hallertauer (4%) for 90 min.

Bottled on November 5, with 120g table sugar

White Labs WLP300 Hefeweizen
OG: 1.055
FG: 1.012
ABV: 5,6%

The post 119. Dunkel Weizen appeared first on Droso.

by erwin at September 10, 2017 01:28 PM

August 29, 2017

Remko Lodder

FreeBSD: Using Open-Xchange on FreeBSD

If you go looking for a usable webmail application, you might end up with Open-Xchange (OX for short). Some larger ISPs use OX as the webmail application for their customers. It has a multitude of options available: using multiple email accounts, caldav/carddav included (not external ones (yet?)), etc. There are commercial options available for these ISPs, but also for smaller resellers etc.

But there is also the community edition, which you can run for free on your own machine(s). It does not have some of the fancy modules that large setups need and require, and some updates might arrive a bit later than they do for paying customers, but it is very complete and usable.

I decided to set this up for my private clients who like to use a webmail client to access their email. At first I ran this in a VM using bhyve on FreeBSD. The VM ran CentOS 6 and had the necessary bits installed for the OX setup. I modified the files I needed to change to get this going, and there, it just worked. But running in a VM, with of course limited CPU and memory assigned (there is always a cap) and it being emulated, I was not very happy with it. I needed to maintain an additional installation and update it, while I have this perfectly fine FreeBSD server instead. (Note that I am not against using bhyve at all, it works very well, but I wanted to reduce my maintenance base a bit :-)).

So a few days ago I considered just moving the stuff over to the FreeBSD host instead. And actually it was rather trivial to do with the working setup on CentOS.

At this moment I do not see an easy way to get the source/components directly from within FreeBSD. I have asked OX for help on this, so that we can perhaps get this sorted out and perhaps even make a Port/pkg out of this for use with FreeBSD.

The required host changes and software installation

The first thing I did was create a zfs dataset for /opt. The software is normally installed there, and in this case I wanted a contained location which I can snapshot, delete, etc. without affecting much of the normal system. I copied over the /opt/open-xchange directory from my CentOS installation. I looked at the installation on CentOS and noticed that it used a specific user 'open-xchange', which I created on my FreeBSD host. I changed the files to be owned by this user. Getting a process listing on the CentOS machine also revealed that it needed Java/JDK, so I installed the openjdk8 pkg (''pkg install openjdk8''). The setup did not yet start; there were errors about /bin/bash missing. Obviously that required installing bash (''pkg install bash''), and then you can go two ways: you can alter every shebang (#!) to match /usr/local/bin/bash (or better yet #!/usr/bin/env bash), or you can symlink /usr/local/bin/bash to /bin/bash, which is what I did (I asked OX to make it more portable by using the env variant instead).

The /var/log/open-xchange directory does not normally exist, so I created it and made sure that ''open-xchange'' could write to it (mkdir /var/log/open-xchange && chown open-xchange /var/log/open-xchange).
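
Condensed, the host-side preparation described above amounts to something like this (the zroot pool name and the nologin shell are my assumptions):

# dataset for the OX tree, runtime user, and the required packages
zfs create -o mountpoint=/opt zroot/opt
pw useradd open-xchange -d /opt/open-xchange -s /usr/sbin/nologin
pkg install openjdk8 bash
ln -s /usr/local/bin/bash /bin/bash
mkdir -p /var/log/open-xchange && chown open-xchange /var/log/open-xchange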

I was able to start the /opt/open-xchange/sbin/open-xchange process with that, but I could not yet easily reach it. On the CentOS installation there are two files in the Apache configuration that needed some attention on my FreeBSD host. The Apache include files ox.conf and proxy_http.conf give hints about what to change. In my case I needed to do the redirect on the vhost that runs OX (RedirectMatch ^/$ /appsuite/) and make sure the /var/www/html/appsuite directory is copied over from the CentOS installation as well. (You can stick it in any location, as long as you can reach it with your webuser, Alias it to the proper directory, and set up directory access.)

Apache configuration (Reverse proxy mode)

The proxy_http.conf file is more interesting: it includes the reverse proxy settings needed to connect to the java instance of OX and service your clients. I needed to add a few modules to Apache so that it could work. I already had several proxy modules enabled for different reasons, so the list below can probably be trimmed a bit to the exact modules needed, but since this works for me, I might as well just show you:

LoadModule slotmem_shm_module libexec/apache24/mod_slotmem_shm.so
LoadModule deflate_module libexec/apache24/mod_deflate.so
LoadModule expires_module libexec/apache24/mod_expires.so
LoadModule proxy_module libexec/apache24/mod_proxy.so
LoadModule proxy_connect_module libexec/apache24/mod_proxy_connect.so
LoadModule proxy_http_module libexec/apache24/mod_proxy_http.so
LoadModule proxy_scgi_module libexec/apache24/mod_proxy_scgi.so
LoadModule proxy_wstunnel_module libexec/apache24/mod_proxy_wstunnel.so
LoadModule proxy_ajp_module libexec/apache24/mod_proxy_ajp.so
LoadModule proxy_balancer_module libexec/apache24/mod_proxy_balancer.so
LoadModule lbmethod_byrequests_module libexec/apache24/mod_lbmethod_byrequests.so
LoadModule lbmethod_bytraffic_module libexec/apache24/mod_lbmethod_bytraffic.so
LoadModule lbmethod_bybusyness_module libexec/apache24/mod_lbmethod_bybusyness.so

After that it was running fine for me. My users can log in to the application, and the local directories are used instead of the VM which ran it first. If you look at previous documentation on this subject, you will notice that more third-party packages were needed back then. It could easily be that more modules are needed than the ones I wrote about; my setup was not clean, the host already runs several websites (one of them being this one) and of course supporting packages were already installed.

Updating is currently NOT possible. The CentOS installation requires running ''yum update'' periodically, but that is obviously not possible on FreeBSD: the packages used within CentOS are not directly usable on FreeBSD. I have asked OX to provide the various Community base and optional modules as raw .tar.gz files, so that we can fetch them and install them in the proper location(s). As long as the .js/.jar files etc. are all there and the scripts are modified to start, it will just work. I have not yet created a startup script for this. For the moment I will just start the VM, see whether there are updates, and copy them over instead. Since I did not need to make additional changes on the main host, it is a very easy and straightforward process in this case.
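Should you want the daemon supervised by the system anyway, an rc.d script along these lines ought to do (an untested sketch; only the user and command path are taken from the setup above, and I have not verified whether the binary backgrounds itself cleanly):

#!/bin/sh
#
# PROVIDE: open_xchange
# REQUIRE: NETWORKING

. /etc/rc.subr

name="open_xchange"
rcvar="open_xchange_enable"
load_rc_config $name
: ${open_xchange_enable:="NO"}

open_xchange_user="open-xchange"
command="/opt/open-xchange/sbin/open-xchange"

run_rc_command "$1"

Enable it with open_xchange_enable="YES" in /etc/rc.conf.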


There is no support for OX on FreeBSD. Of course I would like to see at least some support, to promote my favorite OS more, but that is a financial matter. It might not cost a lot to deliver the .tar.gz files so that we can package them and spread the usage of OX to more installations (and thus perhaps add revenue for OX as commercial installations), but it will cost FTEs to support more than that. If you see a commercial opportunity, please let them know so that this might become more and more realistic.

The documentation written above is just how I have set up the installation, and I wanted to share it with you. I do not offer support on it, but of course I am willing to answer questions you might have about the setup. I did not include the vhost configuration in its entirety; if that is a popular request, I will add it to this post.

Open Questions to OX:

So as mentioned I have questioned OX for some choices:

  • Please use a more portable path for the Bash shell (#!/usr/bin/env bash)
  • Please allow the use of a different localbase (/usr/local/open-xchange for example)
  • Please allow FreeBSD packagers to fetch a "clean" .tar.gz, so that we can package this for OX and distribute it for our end-users.
  • Unrelated to the post above: Please allow the usage of external caldav/carddav providers


I have found another thing that I needed to change: I needed to use gsed (GNU sed) instead of FreeBSD's sed so that the listuser scripts work. Linux sed behaves a bit differently, but if you replace sed with gsed those scripts will work fine.
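Something like the following takes care of that (the script path is an assumption based on my setup, and you should review what such a substitution changes before relying on it):

pkg install gsed
# hypothetical example: switch one helper script over to GNU sed
sed -i '' 's/[[:<:]]sed[[:>:]]/gsed/g' /opt/open-xchange/sbin/listuser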

I have not yet received feedback from OX.

by Remko Lodder at August 29, 2017 07:48 AM

August 20, 2017


Color themes support^Whack for vt(4)

I was updating my laptop to the latest HEAD today and noticed that my bash prompt looks ugly in the default console color scheme. So, what with one thing and another, I ended up writing color theme support for vt(4), just because it was a fun thing to do. The idea is that you can redefine any ANSI color in the console using a variable in /boot/loader.conf, i.e.:

kern.vt.color.0.rgb="0,0,0" # color 0 is black
# or
kern.vt.color.15.rgb="#ffffff" # color 15 is white
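For example, the first few entries of a Tomorrow Night-style palette could be set like this (the RGB values here are illustrative, not the exact theme):

kern.vt.color.0.rgb="#1d1f21"   # black / background
kern.vt.color.1.rgb="#cc6666"   # red
kern.vt.color.2.rgb="#b5bd68"   # green
kern.vt.color.4.rgb="#81a2be"   # blue
kern.vt.color.7.rgb="#c5c8c6"   # white / foreground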

Here is what my Tomorrow Night theme looks like:


It works only with framebuffer-based consoles like efifb or i915kms.
Patch: console-color-theme.diff

by gonzo at August 20, 2017 04:16 AM

April 11, 2017

Eric Anholt

This week in vc4 (2017-04-10): dmabuf fencing, meson

The big project for the last two weeks has been developing dmabuf fencing support for vc4.  Without dmabuf fences, when passing buffers between devices the user needs to manually wait for the job to finish on one (say, camera snapshot) before letting the other device get started (accumulating GL commands to texture from the camera snapshot).  That means leaving both devices idle for a moment while the CPU accumulates the command stream for the consumer, but the bigger pain is that it requires that the end user manage the synchronization.

With dma-buf fencing in the kernel, a "reservation object" generated by the dma-buf exporter tracks the fences of the various devices using the shared object, and then the device drivers get to look at that list and wait on each other's fences when using it.

So far, I've got my reservations and fences being exported from vc4, so that pl111 display can wait for vc4 to be done before actually putting a new pageflip up on the screen.  I haven't quite hooked up the other direction, for camera capture into vc4 display or GL texturing (I don't have a testcase for this, as the current camera driver doesn't expose dmabufs), but it shouldn't be hard.

On the meson front, rendercheck is now converted to meson upstream.  I've made more progress on the X Server:  Xorg is now building, and even successfully executes Xorg -pogo with the previous modesetting driver in place.  The new modesetting driver is failing mysteriously.  With a build hack I got from the meson folks and some work from ajax, the sdksyms script I complained about in my last post isn't used at all on the meson build.  And, best of all, the meson devs have written the code needed for us to not even need the build hack I'm using.

It's so nice to be using a build system that's an actual living software project.

by anholt at April 11, 2017 12:48 AM

March 27, 2017

Eric Anholt

This week in vc4 (2017-03-27): Upstream PRs, more CMA, meson

Last week I sent pull requests for bcm2835 changes for 4.12.  We've got some DT updates for HDMI audio, DSI, and SDHOST, and defconfig changes to enable SDHOST.  The DT changes to actually enable SDHOST (and get wifi working on Pi3) won't land until 4.13.

I also wrote a patch to enable using more than 256MB of CMA memory (and not require any particular alignment).  The 256MB limit was due to a hardware bug: the binner's memory allocations get dereferenced with their top 4 bits set to the top 4 bits of the tile state data array's address.  Given that tile state allocations happen after CL setup (while the binner is running and throwing overflow interrupts), there was no way to guarantee that we could find overflow memory with the top bits matching.

The new solution, suggested by someone from the set top box group, is to allocate a single 16MB to 32MB buffer at HW init time, and return all of those types of allocations out of it, since it turns out you don't need much to complete rendering of any given scene.  I've been mulling over the details of a solution for a while, and finally wrote and tested the patch I wanted (tricky parts included freeing the memory when the hardware was idle, and how to track the lifetimes of the sub-allocations).  Results look good, and I'll be submitting it this week.

However, I spent most of the week on converting the X Server over to meson.

Meson is a delightful new build system (based around Ninja on Linux) that massively speeds up builds, while also being portable to Windows (unlike autotools generally).  If you've ever tried to build the X stack on Raspberry Pi, you know that autotools is painfully slow.  It's also been the limiting factor for me in debugging my scripts for CI for the X Server -- something we'd really like to be doing as we hack on glamor or do refactors in the core.

So far all I've landed in this project is code deletion, as I find build options that aren't hooked up to anything, or code that isn't hooked up to build options.  This itself will speed up our builds, and ajax has been working in parallel on deleting a bunch of code that makes the build messier than it needs to be.  I've also submitted patches for rendercheck converting to meson (as a demo of what the conversion looks like), and I have Xephyr, Xvfb, Xdmx, and Xwayland building in the X Server with meson.

So far the only stumbling block for the meson conversion of the X Server is the sdksyms.c file.  It's the ugliest part of the build -- running the C preprocessor on a generated .c file that #includes a bunch of .h files, then running the output of that through awk and trying to parse C using regular expressions.  This is, as you might guess, somewhat fragile.

My hope for a solution to this is to just quit generating sdksyms.c entirely.  Using ELF sections, we can convince the linker to not garbage collect symbols that it thinks are unused.  Then we get to just decorate symbols with XORG_EXPORT or XORG_EXPORT_VAR (unfortunately have to have separate sections for RO vs RW contents), and Xorg will have the correct set of symbols exported.  I started on a branch for this, ajax got it actually building, and now we just need to bash the prototypes so that the same set of symbols are exported before/after the conversion.

by anholt at March 27, 2017 10:43 PM

October 25, 2016

Murray Stokely

FreeBSD on Intel NUCs

I've been away from FreeBSD for a few years, but I wanted more functionality on my home network than I was able to configure with my Synology NAS and router. Specifically, I wanted:

  • a configurable caching name server that would serve up authoritative private names on my LAN and also validates responses with DNSSEC.
  • a more configurable DHCP server so I could make the server assign specific IPs to specific MAC addresses.
  • more compute power for transcoding videos for Plex.

Running FreeBSD 11 on an Intel NUC seemed like an ideal solution to keep my closet tidy. As of this week, $406.63 on Amazon buys a last generation i3 Intel NUC mini PC (NUC5I3RYH), with 8GB of RAM and 128GB of SSD storage. This was the first model I tried since I found reports of others using this with FreeBSD online, but I was also able to get it working on the newer generation i5 based NUC6i5SYK with 16GB of RAM and 256GB of SSD. The major issue with these NUCs is that the Intel wireless driver is not supported in FreeBSD. I am not doing anything graphical with these boxes so I don't know how well the graphics work, but they are great little network compute nodes.


I downloaded the FreeBSD 11 memory stick images, and was pleased to see that the device booted fine off the memory stick without any BIOS configuration required. However, my installation failed trying to mount root ("Mounting from ufs:/dev/ufs/FreeBSD_Install failed with error 19."). Installation from an external USB DVD drive and over the network with PXE both proved more successful at getting me into bsdinstaller to complete the installation.

I partitioned the 128GB SSD device with 8GB of swap and the rest for the root partition (UFS, Journaled and Soft Updates). After installation I edited /etc/fstab to add a tmpfs(5) mount for /tmp. The dmesg output for this host is available in a Gist on Github.
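The tmpfs mount is a single line in /etc/fstab (mine looks roughly like this; the mode option is optional):

tmpfs  /tmp  tmpfs  rw,mode=01777  0  0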

Warren Block's article on SSD on FreeBSD and the various chapters of the FreeBSD Handbook were helpful. There were a couple of tools that were also useful in probing the performance of the SSD with my FreeBSD workload; example invocations follow the list:

  • The smartctl tool in the sysutils/smartmontools package allows one to read detailed diagnostic information from the SSD, including wear patterns.
  • The basic benchmark built into diskinfo -t reports that the SSD is transferring 503-510MB/second.
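For example (the device name is an assumption; substitute your own disk):

smartctl -a /dev/ada0    # detailed SMART diagnostics, including wear
diskinfo -t /dev/ada0    # quick sequential transfer benchmark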
But how well does it perform in practice?

Rough Benchmarks

This post isn't meant to report a comprehensive suite of FreeBSD benchmarks, but I did run some basic tests to understand how well these low-power NUCs perform in practice. To start with, I downloaded the 11-stable source from Subversion and measured the build times to gauge the performance of the new system. All builds were done with a minimal 2-line make.conf:


Build Speed

Build Command          Environment                          Real Time
make -j4 buildkernel   /usr/src and /usr/obj on SSD         10.06 minutes
make -j4 buildkernel   /usr/src on SSD, /usr/obj on tmpfs    9.65 minutes
make -j4 buildworld    /usr/src and /usr/obj on SSD          1.27 hours
make buildworld        /usr/src and /usr/obj on SSD          3.76 hours


In addition to the build times, I also wanted to look more directly at the performance reading from flash and reading from the NFS-mounted home directories on my 4-drive NAS. I first tried Bonnie++, but then ran into a 13-year-old bug in the FreeBSD NFS client. After switching to Bonnie, I was able to gather some reasonable numbers. I had to use really large file sizes for the random write test to eliminate most of the caching that was artificially inflating the results. For those that haven't seen it, Brendan Gregg's excellent blog post highlights some of the issues of file system benchmarks like Bonnie.

[Table: average of 3 Bonnie runs with 40GB file size — columns: Configuration; Random I/O in seeks/sec with CPU utilization; Block Input in reads/sec with CPU utilization; Block Output in writes/sec with CPU utilization.]

The block input rates from my Bonnie benchmarks on the SSD were within 5% of the value provided by the much quicker and dirtier diskinfo -t test.

Running Bonnie with less than a 40GB file size yielded unreliable benchmarks due to caching at the VM layer. The following boxplot shows the random seek performance during 3 runs each at 24, 32, and 40GB file sizes. Performance starts to level off at this point, but with smaller file sizes the reported random seek performance is much higher.

Open Issues

As mentioned earlier, I liked the performance I got with running FreeBSD on a 2015-era i3 NUC5I3RYH so much that I bought a newer, more powerful second device for my network. The 2016-era i5 NUC 6i5SYK is also running great. There are just a few minor issues I've encountered so far:

  • There is no FreeBSD driver for the Intel Wireless chip included with this NUC. Code for other platforms exists but has not been ported to FreeBSD.
  • The memory stick booting issue described in the installation section. It is not clear if it didn't like my USB stick for some reason, or the port I was plugging into, or if additional boot parameters would have solved the issue. Documentation and/or code needs to be updated to make this clearer.
  • Similarly, the PXE install instructions were a bit scattered. The PXE section of the Handbook isn't specifically targeting new manual installations into bsdinstall. There are a few extra things you can run into that aren't documented well or could be streamlined.
  • Graphics / X11 are outside of the scope of my needs. The NUCs have VESA mounts so you can easily tuck them behind an LCD monitor, but it is not clear to me how well they perform in that role.

by Murray ( at October 25, 2016 03:27 AM

April 07, 2016

FreeBSD Foundation

Introducing a New Website and Logo for the Foundation

The FreeBSD Foundation is pleased to announce the debut of our new logo and website, signaling the ongoing evolution of the Foundation identity, and ability to better serve the FreeBSD Project. Our new logo was designed to not only reflect the established and professional nature of our organization, but also to represent the link between the Project and the Foundation, and our commitment to community, collaboration, and the advancement of FreeBSD.

We did not make this decision lightly.  We are proud of the Beastie in the Business Suit and the history he encompasses. That is why you’ll still see him make an appearance on occasion. However, as the Foundation’s reach and objectives continue to expand, we must ensure our identity reflects who we are today, and where we are going in the future. From spotlighting companies who support and use FreeBSD, to making it easier to learn how to get involved, spread the word about, and work within the Project, the new site has been designed to better showcase not only how we support the Project, but also the impact FreeBSD has on the world. The launch today marks the end of Phase I of our Website Development Project. Please stay tuned as we continue to add enhancements to the site.

We are also in the process of updating all our collateral, marketing literature, stationery, etc with the new logo. If you have used the FreeBSD Foundation logo in any of your marketing materials, please assist us in updating them. New Logo Guidelines will be available soon. In the meantime, if you are in the process of producing some new literature, and you would like to use the new Foundation logo, please contact our marketing department to get the new artwork.

Please note: we've moved the blog to the new site. See it here.

by Anne Dickison ( at April 07, 2016 04:40 PM

February 26, 2016

FreeBSD Foundation

FreeBSD and ZFS

ZFS has been making headlines lately, so it seems like the right time to talk about the longstanding relationship between FreeBSD and ZFS.

For nearly seven years, FreeBSD has included a production quality ZFS implementation, making it one of the key features of the FreeBSD operating system. ZFS is a combined file system and volume manager. Decoupling physical media from logical volumes allows free space to be efficiently shared between all of the file systems. ZFS introduced unprecedented data integrity and reliability guarantees to storage on FreeBSD. ZFS supports varying levels of redundancy for tolerance of hardware failures and includes cryptographic checksums on all data to guard against corruption.

Allan Jude, VP of Operations at ScaleEngine and coauthor of FreeBSD Mastery: ZFS, said “We started using ZFS in 2011 because we needed to safely store a huge quantity of video for our customers. FreeBSD was, and still is, the best platform for deploying ZFS in production. We now store more than a petabyte of video using ZFS, and use ZFS Boot Environments on all of our servers.”

So why does FreeBSD include ZFS and contribute to its continued development? FreeBSD community members understand the need for continued development work as technologies evolve. OpenZFS is the truly open source successor to the ZFS project and the FreeBSD Project has participated in OpenZFS since its founding in 2013. FreeBSD developers and those from Delphix, Nexenta, Joyent, the ZFS on Linux project, and the Illumos project work together to continue improving OpenZFS.

FreeBSD’s unique open source infrastructure, copyfree license, and engaged community support the integration of a variety of free software components, including OpenZFS. FreeBSD makes an excellent operating system for servers and end users, and it provides a foundation for many open source projects and commercial products.

We're happy that ZFS is available in FreeBSD as a fully integrated, first class file system and wish to thank all of those who have contributed to it over the years.

by Anne Dickison ( at February 26, 2016 03:23 PM

February 20, 2016

Joseph Koshy

ELF Toolchain v0.7.1

I am pleased to announce the availability of version 0.7.1 of the software being developed by the ElfToolChain project.

This release offers:
  • Better support of the DWARF4 format.
  • Support for more machine architectures.
  • Many bug fixes and improvements.
The release also contains experimental code for:
  • A library handling the Portable Executable (PE) format.
  • A link editor.
The release may be downloaded from SourceForge:
Detailed release notes are available at the URL mentioned above.

Many thanks to the project's supporters for their contributions to the project.

by Joseph Koshy ( at February 20, 2016 12:06 PM

January 25, 2015

Giorgios Keramidas

Some Useful RCIRC Snippets

I have been using rcirc as my main IRC client for a while now, and I really like the simplicity of its configuration. All of my important IRC options now fit in a couple of screens of text.

All the rcirc configuration options are wrapped in an eval-after-load form, to make sure that rcirc settings are there when I need them, but they do not normally cause delays during the startup of all Emacs instances I may spawn:

(eval-after-load "rcirc"
  '(progn
     ;; ... the rcirc-setup-forms described below go here ...
     (message "rcirc has been configured.")))

The “rcirc-setup-forms” are then split into three clearly separated sections:

  • Generic rcirc configuration
  • A hook for setting up nice defaults in rcirc buffers
  • Custom rcirc commands/aliases

Only the first set of options is really required. Rcirc can still function as an IRC client without the rest of them. The rest is there mostly for convenience, and to avoid typing the same setup commands more than once.

The generic options I have set locally are just a handful of settings to set my name and nickname, to enable logging, to let rcirc authenticate to NickServ, and to tweak a few UI details. All this fits nicely in 21 lines of elisp:

;; Identification for IRC server connections
(setq rcirc-default-user-name "keramida"
      rcirc-default-nick      "keramida"
      rcirc-default-full-name "Giorgos Keramidas")

;; Enable automatic authentication with rcirc-authinfo keys.
(setq rcirc-auto-authenticate-flag t)

;; Enable logging support by default.
(setq rcirc-log-flag      t
      rcirc-log-directory (expand-file-name "irclogs" (getenv "HOME")))

;; Passwords for auto-identifying to nickserv and bitlbee.
(setq rcirc-authinfo '(("freenode"  nickserv "keramida"   "********")
                       ("grnet"     nickserv "keramida"   "********")))

;; Some UI options which I like better than the defaults.
(rcirc-track-minor-mode 1)
(setq rcirc-prompt      "»» "
      rcirc-time-format "%H:%M "
      rcirc-fill-flag   nil)

The next section of my rcirc setup is a small hook function which tweaks rcirc settings separately for each buffer (both channel buffers and private-message buffers):

(defun keramida/rcirc-mode-setup ()
  "Sets things up for channel and query buffers spawned by rcirc."
  ;; rcirc-omit-mode always *toggles*, so we first 'disable' it
  ;; and then let the function toggle it *and* set things up.
  (setq rcirc-omit-mode nil)
  (set (make-local-variable 'scroll-conservatively) 8192))

(add-hook 'rcirc-mode-hook 'keramida/rcirc-mode-setup)

Finally, the largest section of them all contains definitions for some custom commands and short-hand aliases for stuff I use all the time. First come a few handy aliases for talking to ChanServ, NickServ and MemoServ. Instead of typing /quote nickserv help foo, it’s nice to be able to just type /ns help foo. This is exactly what the following three tiny forms enable, by letting rcirc know that “/cs”, “/ms” and “/ns” are valid commands and passing-along any arguments to the appropriate IRC command:

;; Handy aliases for talking to ChanServ, MemoServ and NickServ.

(defun-rcirc-command cs (arg)
  "Send a private message to the ChanServ service."
  (rcirc-send-string process (concat "CHANSERV " arg)))

(defun-rcirc-command ms (arg)
  "Send a private message to the MemoServ service."
  (rcirc-send-string process (concat "MEMOSERV " arg)))

(defun-rcirc-command ns (arg)
  "Send a private message to the NickServ service."
  (rcirc-send-string process (concat "NICKSERV " arg)))

Next comes a nifty little /join replacement which can join multiple channels at once, as long as their names are separated by spaces, commas or semicolons. To make its code more readable, it’s split into 3 little functions: rcirc-trim-string removes leading and trailing whitespace from a string, rcirc-normalize-channel-name prepends “#” to a string if it doesn’t have one already, and finally rcirc-cmd-j uses the first two functions to do the interesting bits:

(defun rcirc-trim-string (string)
  "Trim leading and trailing whitespace from a string."
  (replace-regexp-in-string "^[[:space:]]*\\|[[:space:]]*$" "" string))

(defun rcirc-normalize-channel-name (name)
  "Normalize an IRC channel name. Trim surrounding
whitespace, and if it doesn't start with a ?# character, prepend
one ourselves."
  (let ((trimmed (rcirc-trim-string name)))
    (if (= ?# (aref trimmed 0))
        trimmed                       ; already starts with a '#'
      (concat "#" trimmed))))

;; /j CHANNEL[{ ,;}CHANNEL{ ,;}CHANNEL] - join multiple channels at once
(defun-rcirc-command j (arg)
  "Short-hand for joining a channel by typing /J channel,channel2,channel,...

Spaces, commas and semicolons are treated as channel name
separators, so that all the following are equivalent commands at
the rcirc prompt:

    /j demo;foo;test
    /j demo,foo,test
    /j demo foo test"
  (let* ((channels (mapcar 'rcirc-normalize-channel-name
                           (split-string (rcirc-trim-string arg) "[ ,;]+"))))
    (rcirc-join-channels process channels)))

The last short-hand command lets me type /wii NICK to get “extended” whois information for a nickname, which usually includes idle times too:

;; /WII nickname -> /WHOIS nickname nickname
(defun-rcirc-command wii (arg)
  "Show extended WHOIS information for one or more nicknames."
  (dolist (nickname (split-string arg "[ ,]+"))
    (rcirc-send-string process (concat "WHOIS " nickname " " nickname))))

With that, my rcirc setup is complete (at least in the sense that I can use it to chat with my IRC friends). There are no fancy bells and whistles like DCC file transfers, or fancy color parsing, and similar things, but I don’t need all that. I just need a simple, fast, pretty IRC client, and that’s exactly what I have now.

by keramida at January 25, 2015 08:41 AM

January 07, 2015

Murray Stokely

AsiaBSDCon 2014 Videos Posted (6 years of BSDConferences on YouTube)

Sato-san has once again created a playlist of videos from AsiaBSDCon. There were 20 videos from the conference held March 15-16, 2014, and the papers can be found here. Congrats to the organizers for running another successful conference in Tokyo. A full list of videos is included below. Six years ago, when I first created this channel, videos longer than 10 minutes couldn't normally be uploaded to YouTube and we had to create a special partner channel for the content. It is great to see how the availability of technical video content about FreeBSD has grown in the last six years.

by Murray ( at January 07, 2015 11:22 PM

December 26, 2013

Giorgios Keramidas

Profiling is Our Friend

I recently wrote a tiny demo program to demonstrate to co-workers how one can build a cache with age-based expiration of its entries, using purely immutable Scala collections. The core of the cache was something like 25-30 lines of Scala code like this:

class FooCache(maxAgeMillis: Long) {
  def now: Long = System.currentTimeMillis

  case class CacheEntry(number: Long, value: Long,
                        birthTime: Long) {
    def age: Long = now - birthTime
  }

  lazy val cache: AtomicReference[HashMap[Long, CacheEntry]] =
    new AtomicReference(HashMap[Long, CacheEntry]())

  def values: HashMap[Long, CacheEntry] =
    cache.get.filter { case (key, entry) =>
      entry.age <= maxAgeMillis }

  def get(number: Long): Long = {
    values.find { case (key, entry) =>        // the infamous "line 17"
      key == number && entry.age <= maxAgeMillis
    } match {
      case Some((key, entry)) =>
        entry.value                // cache hit
      case _ =>                    // cache miss: compute, store, return
        val entry = CacheEntry(number, compute(number), now)
        cache.set(values + (number -> entry))
        entry.value
    }
  }

  def compute(number: Long): Long =
    { /* Some long-running computation based on 'number' */ 0L }
}
// Needs: import java.util.concurrent.atomic.AtomicReference
//        import scala.collection.immutable.HashMap
The main idea here is that we keep an atomically updated reference to an immutable HashMap. Every time we look for entries in the HashMap we check if (entry.age <= maxAgeMillis), to skip over entries which are already too old to be of any use. Then, at cache-insertion time, we go through the ‘values’ function, which excludes all cache entries that have already expired.

Note how the cache itself is not ‘immutable’. We are just using an immutable HashMap collection to store it. This means that Scala can do all sorts of optimizations when multiple threads want to iterate through all the entries of the cache looking for something they want. But there’s an interesting performance bug in this code too…

It’s relatively easy to spot once you know what you are looking for, but did you already catch it? I didn’t. At least not the first time I wrote this code. But I did notice something was ‘odd’ when I started doing lookups from multiple threads and looked at the performance stats of the program in a profiler. YourKit showed the following for this version of the caching code:

JVM Profile #1

See how CPU usage hovers around 60% and we are doing a hefty bunch of garbage collections every second? The profiler quickly led me to line 17 of the code pasted above, where I am going through ‘values’ when looking up cache entries.

Almost 94% of the CPU time of the program was spent inside the .values() function. The profiling report included this part:

|                           Name                            | Time   | Time |
|                                                           | (ms)   | (%)  |
| demo.caching                                              | 62.084 | 99 % |
| +-- d.caching.Numbers.check(long)                         | 62.084 | 99 % |
|   +-- d.caching.FooCacheModule$FooCache.check(long)       | 62.084 | 99 % |
|     +---d.caching.FooCacheModule$FooCache.values()        | 58.740 | 94 % |
|     +---scala.collection.AbstractIterable.find(Function1) |  3.215 |  5 % |

We are spending far too much time expiring cache entries. It is easy to understand why with a second look at the code of the get() function: every cache lookup does old-entry expiration and then searches for a matching cache entry.

The way cache-entry expiration works with an immutable HashMap as the underlying cache entry store is that values() iterates over the entire cache HashMap, and builds a new HashMap containing only the cache entries which have not expired. This is bound to take a lot of processing power, and it’s also what’s causing the creation of all those ‘new’ objects we are garbage collecting every second!

Do we really need to construct a new cache HashMap every time we do a cache lookup? Of course not… We can just filter the entries while we are traversing the cache.

Changing line 17 from values.find{} to cache.get.find{} means we no longer do cache-entry expiration at the time of every single lookup, and now our cache lookup speed is not limited by how fast we can construct new CacheEntry objects, link them to a HashMap and garbage-collect the old ones. Running the new code through YourKit once more showed an immensely better utilization profile for the 8 cores of my laptop’s CPU:

JVM Profile #2

Now we are not spending a bunch of time constructing throw-away objects, and garbage collector activity has dropped by a huge fraction. We can also make much more effective use of the available CPU cores for doing actual cache lookups, instead of busy work!

This was instantly reflected in the metrics I was collecting for the actual demo code. Before the change, the code was doing almost 6000 cache lookups per second:

-- Timers -------------------------------
             count = 4528121
         mean rate = 5872.91 calls/second
     1-minute rate = 5839.87 calls/second
     5-minute rate = 6053.27 calls/second
    15-minute rate = 6648.47 calls/second
               min = 0.29 milliseconds
               max = 10.25 milliseconds
              mean = 1.34 milliseconds
            stddev = 1.45 milliseconds
            median = 0.62 milliseconds
              75% <= 0.99 milliseconds
              95% <= 4.00 milliseconds
              98% <= 4.59 milliseconds
              99% <= 6.02 milliseconds
            99.9% <= 10.25 milliseconds

After the change to skip cache expiration at cache lookup, and only do cache entry expiration when we are inserting new cache entries, the same timer reported a hugely improved speed for cache lookups:

-- Timers -------------------------------
             count = 27500000
         mean rate = 261865.50 calls/second
     1-minute rate = 237073.52 calls/second
     5-minute rate = 186223.68 calls/second
    15-minute rate = 166706.39 calls/second
               min = 0.00 milliseconds
               max = 0.32 milliseconds
              mean = 0.02 milliseconds
            stddev = 0.02 milliseconds
            median = 0.02 milliseconds
              75% <= 0.03 milliseconds
              95% <= 0.05 milliseconds
              98% <= 0.05 milliseconds
              99% <= 0.05 milliseconds
            99.9% <= 0.32 milliseconds

That’s more like it. A cache lookup which completes within 0.32 milliseconds even at the 99.9th percentile of all cache lookups is something I definitely prefer working with. The insight from profiling tools like YourKit was instrumental in both understanding what the actual problem was, and verifying that the solution actually had the effect I expected it to have.

That’s why profiling is our friend!

by keramida at December 26, 2013 04:38 AM

September 25, 2012

Joseph Koshy

New release: ELF Toolchain v0.6.1

I am pleased to announce the availability of version 0.6.1 of the software being developed by the ElfToolChain project.

This new release adds support for additional operating systems (DragonFly BSD, Minix and OpenBSD) and contains many bug fixes and documentation improvements.

This release also marks the start of a new "stable" branch, for the convenience of downstream projects interested in using our code.

Comments welcome.

by Joseph Koshy ( at September 25, 2012 02:25 PM

April 02, 2010

Henrik Brix Andersen

Downloading Sony GPS Assist Data Manually

After having bought a new Sony DSC-HX5V digital camera, which is equipped with an integrated GPS, I discovered that it comes with Windows-only software for downloading and updating the GPS almanac on the camera (the supplied PMB Portable software runs on Apple OS X, but it does not support downloading the GPS almanac).

After tinkering a bit with tcpdump(1) and friends I found out how to perform the download and update manually:

  1. Download assistme.dat
  2. Download assistme.md5
  3. Verify that the MD5 sum of the assistme.dat file matches the one in the assistme.md5 file
  4. Create a top-level folder hierarchy on the memory card for the camera (not the internal memory of the camera) called PRIVATE/SONY/GPS/
  5. Place the downloaded assistme.dat file in the PRIVATE/SONY/GPS/ folder
  6. Place the memory card in the camera and verify that the GPS Assist Data is valid

I have written a small perl script for automating the above tasks. The script takes the mount point of the memory card as its argument.
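For those who prefer plain sh, a rough equivalent of those steps looks like this (the download URL is a placeholder rather than Sony's actual server, and the md5 file is assumed to hold the checksum in its first field):

#!/bin/sh
BASEURL="http://example.invalid/gps"   # substitute Sony's download location
CARD="$1"                              # mount point of the memory card

fetch -o assistme.dat "${BASEURL}/assistme.dat"
fetch -o assistme.md5 "${BASEURL}/assistme.md5"

# Verify the almanac before installing it on the card.
[ "$(md5 -q assistme.dat)" = "$(cut -d' ' -f1 assistme.md5)" ] || {
    echo "MD5 mismatch, aborting." >&2
    exit 1
}

mkdir -p "${CARD}/PRIVATE/SONY/GPS"
cp assistme.dat "${CARD}/PRIVATE/SONY/GPS/"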

by brix at April 02, 2010 06:31 PM

March 21, 2010

Henrik Brix Andersen

Monitoring Soekris Temperature through SNMP

Here’s a quick tip for monitoring the temperature of your Soekris net4801 through SNMP on FreeBSD:

Install the net-mgmt/bsnmp-ucd and sysutils/env4801 ports and add the following to /etc/snmpd.conf:

begemotSnmpdModulePath."ucd" = "/usr/local/lib/snmp_ucd.so"
extNames.0 = "temperature"
extCommand.0 = "/usr/local/sbin/env4801 | /usr/bin/grep ^Temp | /usr/bin/cut -d ' ' -f 6"

Enable and start bsnmpd(1). The temperature of your Soekris net4801 can now be queried through the UCD-SNMP-MIB::extOutput.0 OID.
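For example, with the Net-SNMP command line tools (hostname and community string are placeholders for your own values):

snmpget -v 2c -c public soekris.example.net UCD-SNMP-MIB::extOutput.0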

by brix at March 21, 2010 11:39 AM