Planet FreeBSD

December 01, 2019

Gonzo

Audio for RK3399

For the last two weeks I’ve been working on audio support for the Firefly-RK3399. Full support requires a number of things that are not quite there or not available in the mainline FreeBSD kernel. The main low-level hardware functionality consists of two parts: the I2S block in the SoC and the RT5640 audio codec that converts digital audio to an analog signal. They talk to each other using the I2S protocol. A little bit higher up sits an FDT virtual “device” called simple-audio-card. This part is responsible for coordinating the setup of both hardware components: making sure they agree on the number of channels, the number of bits per sample, and the clock specifics of the I2S protocol. There is no code for it in the FreeBSD kernel, so I had to just hardcode these things in both hardware drivers.
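
For reference, the kind of FDT node this glue is supposed to parse looks roughly like this (a sketch based on the generic simple-audio-card binding; the exact names and phandles on the Firefly-RK3399 may differ):

sound {
        compatible = "simple-audio-card";
        simple-audio-card,name = "rt5640-audio";
        /* the I2S protocol variant both sides must agree on */
        simple-audio-card,format = "i2s";

        simple-audio-card,cpu {
                sound-dai = <&i2s0>;    /* the SoC I2S block */
        };
        simple-audio-card,codec {
                sound-dai = <&rt5640>;  /* the RT5640 codec */
        };
};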

The other obviously missing part was the I2S clocks in the CRU unit drivers. This was easily fixable by consulting the RK3399 TRM. Still, having added the clock support, I couldn’t get a signal on the physical pin. Thanks to manu@, who pointed out that some stuff might be missing in the io-domain and power-regulator area, this was resolved too, after setting bit 1 of the GRF_IO_VSEL register.

With all these bits in place, I was able to get sound out of the headphones, but it was distorted. My first instinct was to blame a mismatch of clocks/formats between the I2S block and the codec, so I spent two very frustrating days experimenting with polarities and data formats, only to find out that there is a GPIO that controls the headphone output on the Firefly-RK3399. This is a Firefly-specific bit and is only referenced in the Firefly fork of the Rockchip fork of the Linux kernel. The way it’s implemented, loud noises can still pass through the filter to the headphones, so at max volume you can hear loud parts as a series of pops and grunts. By manually configuring GPIO4_C5 (gpioc4/pin21) I was finally able to get clear sound out of the Firefly.
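
For anyone wanting to poke the same pin by hand, FreeBSD’s gpioctl(8) should do it, assuming the pin really shows up as pin 21 on gpioc4 as described above:

# configure the pin as an output, then drive it high
gpioctl -f /dev/gpioc4 -c 21 OUT
gpioctl -f /dev/gpioc4 21 1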

On the bright side, now I know how to convert an I2S stream captured by a Saleae logic analyzer to a WAV file: https://github.com/roel0/PCM2Wav-py

The next step is to check how much work it would take to implement the simple-audio-card part. The (very hacky) WIP is at https://github.com/gonzoua/freebsd/commits/rk3399_audio

by gonzo at December 01, 2019 03:46 AM

November 25, 2019

FreeBSD

July-September 2019 Status Report

The July to September 2019 Status Report is now available.

November 25, 2019 08:00 AM

November 17, 2019

Colin Percival

Some notes on userspace routing

For reasons which will be immediately apparent to anyone who has read my earlier blog post about the EC2 Instance Metadata Service (and its use by IAM Roles), I recently decided that I wanted to intercept outgoing IP packets which had a destination of 169.254.169.254; in some cases I want to redirect or block them, and in other cases I want them to proceed unimpeded. To make things harder, I had two more constraints:
  1. I don't want to write any new kernel code, since venturing into the kernel introduces a much wider range of potential adverse outcomes if my code is buggy, and
  2. I don't want to make use of firewalls, since users might have their own firewall rulesets which could conflict with EC2 IMDS-filtering rules; also, traversing a firewall — even one with a trivial ruleset — has a cost which can become nontrivial for the sort of high-bandwidth applications which FreeBSD excels at. (For the same reasons, I'm less than enthusiastic about the suggestion in Amazon's documentation that users consider using local firewall rules to restrict access to the Instance Metadata Service.)

While I like to consider myself an experienced FreeBSD developer, networking is not my area of expertise; so I spent a significant amount of time flailing wildly and reading often wildly-out-of-date documentation while trying to figure this out. In the hope of helping the next person who wants to do something like this, here are some notes about what worked and what didn't.
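
One general technique that satisfies both constraints, sketched here with no claim that it is where the notes end up: point a host route at a tun(4) interface and let a userspace daemon read the raw IP packets from the corresponding character device.

# create a tun interface (it comes up once a process opens /dev/tun0)
ifconfig tun0 create
# send the instance metadata address into it
route add -host 169.254.169.254 -interface tun0
# a daemon holding /dev/tun0 open then receives each matching packet
# and can drop it, answer it, or re-inject it toward the real server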

November 17, 2019 02:00 AM

November 04, 2019

FreeBSD

FreeBSD 12.1-RELEASE Available

FreeBSD 12.1-RELEASE is now available. Please be sure to check the Release Notes and Release Errata before installation for any late-breaking news and/or issues with 12.1. More information about FreeBSD releases can be found on the Release Information page.

November 04, 2019 08:00 AM

Adrian Chadd

HF phasing noise eliminators, or "wow ok this mostly works"

I have a long standing issue with noise from a neighbour down the road that generates super wideband noise on the HF and low VHF bands. It's leaking out into the powerlines. It's pretty amazing.

Ok, so whilst the FCC goes and figures this out with the house in question, what can I do?

Well, one thing is to use an HF phasing noise eliminator. It's basically a pair of amplifiers and a phase-inverting mixer. The "noise" antenna is a low-gain antenna that picks up the noise you want to filter out, but isn't sensitive enough to actually act as a longer-distance HF antenna. You adjust the gain of your main input antenna and the gain of the noise antenna to be at mostly the same level, and then adjust the phase delay to cancel out the local noise source.

I bought a kit from Russia (http://www.ra0sms.ru/) and .. well, it worked. Pretty well. I used a discone antenna with a vertical radiator for 25/50MHz operation as my "very deaf" noise reception antenna and mixed in the signal from my HF antennas and .. well, it worked fine on 7MHz and 3.5MHz. I think it's a bit too deaf for 3.5MHz operation though; I bet it'd work better on higher frequencies because my noise antenna can start picking up more of the local area noise.

What about 1.8MHz? The 160m band?

Well, this is where the fun begins. The short version is - all I heard was lots of AM radio stations where they shouldn't be. The long version is - I am surrounded by super loud (>10kW) AM radio stations that you definitely need a bandpass filter for. The HF radios I have are good enough to filter them out, but the noise eliminator? It's doing all its mixing before there's any band filtering, so it's also mixing in all the AM broadcast band crap.

So this points out another couple of things I need to do - I need to add a little bandpass filter on the noise antenna, and I need a proper HF preselector (ie, a narrow adjustable bandpass filter) on the main radio receiver. That way the transistors doing amplification/mixing in this phaser don't get swamped by crap that you're not trying to receive.

(For those who are radio inclined - I'm a few miles from 1100 kHz (KFAX), which is a 50 kW AM radio station - it shows up as S9+30 or more here. I have a bunch of others that show up as S9+20; so I have a lot of AM noise here..)

by Adrian (noreply@blogger.com) at November 04, 2019 04:21 AM

October 27, 2019

Warner Losh

EuroBSDCon Talk is up

My EuroBSDCon talk on the first 10 years of Unix is up: EuroBSDCon Unix at 50, V7 at 40. I hope you enjoy it.

by Warner Losh (noreply@blogger.com) at October 27, 2019 04:35 PM

October 17, 2019

Colin Percival

Better Canadian poll aggregation

On Tuesday, I wrote about how Canadian poll aggregators suck — in particular, pointing out the common ways that their methodologies fail. At the end of the post, I said that we could do better; here are the details of how.

October 17, 2019 10:20 AM

October 14, 2019

Warner Losh

Video Footage of the first PDP-7 to run Unix

Hunting down Ken's PDP-7: video footage found

In my prior blog post, I traced Ken's scrounged PDP-7 to SN 34. In this post I'll show that we have actual video footage of that PDP-7, thanks to an old film from Bell Labs. This gives us almost a minute of footage of the PDP-7 Ken later used to create Unix.

The Incredible Machine

The Incredible Machine is a Bell Labs film released in 1968 and available on YouTube here: https://www.youtube.com/watch?v=iwVu2BWLZqA. It outlines a number of innovative things Bell Labs was doing with audio and graphics on different PDP machines from DEC. Pretty cool, right? Especially because this film features the song "Daisy" sung by a computer, a plot point that would feature heavily in Stanley Kubrick's 2001: A Space Odyssey (although that plot point traces back to 1962, and was based on work done on an IBM machine with the first song sung by a computer).

I'll concentrate on footage from 9:19 to 10:31 in the film. This footage talks about making computer made music. If you listen to the audio, it sounds quite quaint, although when it was made it was cutting edge.

Making the case that it's a PDP-7

Here's a screen shot from 9:43 in The Incredible Machine. From it we can make the case that we're looking at a PDP-7, hereafter called TIM PDP-7.
The screen shot looks a little boring until we compare it against two photos of PDP-7s from the archives. The first one is a photo from a DEC PDP-7 sales catalog that's available online at https://www.soemtron.org/pdp7.html. The second photo is of SN115, a semi-restored machine at the Institute of Physics in Oslo (picture also from Soemtron).
I've superimposed the three photos together and highlighted 4 areas of convergence with numbers:
  1. The register panel that reports the status of an expansion cabinet. This is clearly visible in both photos in similar places.
  2. The control panel. It's clearly the same between these two photos. The control panel is used to examine and modify memory contents of the system as well as displaying internal registers of the PDP-7.
  3. The paper tape reader (option 444B). This reader is also visible from 9:19 to 9:30 in The Incredible Machine reading in a new program.
  4. The PDP-7 name badge. Although it's quite obscured in these photos, it's clearly the same.
So, I think it is safe to conclude that the computer in this footage is a PDP-7. We have two different pictures of actual PDP-7s that the computer in The Incredible Machine clearly corresponds to. I'll leave it as an exercise for the reader to exclude all the other machines from that era, though my experience suggests that the register and control panels should be enough.

Hunting the Serial Number for this PDP-7

So we have found footage of a PDP-7 from Bell Labs. That's cool, can we push the envelope further and track down which serial number TIM PDP-7 might be? Let's look at the key features of the machine in the picture above and the video footage.
  • Option 444B, paper tape reader (Also seen in 9:19-9:30 in The Incredible Machine)
  • Option 340 Display (seen 10:06-10:14 and 10:22-10:25)
  • Option 370 High Speed Light Pen (seen 10:06-10:25 as well)
So, if we look at the PDP-7 field service list available at https://www.soemtron.org/downloads/decinfo/18bitservicelist1972.pdf (itself an excerpt of a more complete one at bitsavers), we find there are two machines with the display and light pen: SN34 and SN149.

Ken's machine (SN34) has all these options:
The other candidate machine (SN149) in the list has them as well:

So how can we decide which is which?

If we look at dates, we see that the SN34 machine, with an installation date of 1965, was in place early enough to be in a 1968 film. SN149 appears to be too late, with a 1969 date. However, that's not conclusive: the other fields are blank, and SN148 and SN150 both have 1967 dates. It's weakly suggestive, so we need more. We can't eliminate it based on dates, as pleasing as it would be to do so.

We may be able to eliminate SN149 as the TIM PDP-7 because the TIM PDP-7 clearly had the Option 444B paper tape reader, and SN149 doesn't list that in the field service log. Based on this we can exclude SN149, but only weakly, because paper tape readers were common.

Can we make the case stronger? The service logs show that SN149 has an Option 550/TU55, which is a DECtape and controller, while SN34 does not. Ken Thompson has confirmed there was no DECtape, just paper tape, on the machine he used. If we could confirm this machine didn't have a DECtape, our case for it being SN34 would be strong.

Looking at the footage is hard because it is so dark. Even so, we can see a blank panel over the Option 444B paper tape reader shown starting at 9:19, though it's hard to be sure. If we look at the 9:43 frame above, we can't tell. When the color balance is adjusted, we can clearly see the paper tape reader from the initial footage and what appears to be a blank panel above it. There are no tell-tale circles that would indicate an installed DECtape there. Single-stepping the video with this enhancement shows no other targets. There is something weird just over the younger gentleman's head, but it's not a DECtape.

Looking at the field log, the DECtape components were serviced in 1969, after this film was made. It's not clear if this was when the parts were added, or if they were merely repaired or replaced. After studying the field service log for a while, I think we should bias our reading towards replacement rather than installation, especially since there's no other bulk input medium, like a paper tape reader, listed.

Pulling it all together: we have clearly found a PDP-7. There were only 4 PDP-7s shipped to AT&T. Only two had the 340 display option clearly seen in the film. Of those two, one had DECtape, the other had a paper-tape reader. We know from Ken that his had a paper-tape reader. There's no DECtape evident in this film, but there is clear evidence of the paper-tape reader. It's not known where either SN34 or SN149 lived inside Bell Labs, but we know that Ken used a machine that had been cast off from the Visual and Acoustics Department. While the film doesn't list the internal departments that contributed to it, the computer-generated music strongly suggests it could have been the Visual and Acoustics Department. Taken together, we can say that three lines of evidence support the conclusion that the PDP-7 in The Incredible Machine from 9:19 to 10:30 would later be used by Ken to create Unix.

by Warner Losh (noreply@blogger.com) at October 14, 2019 05:52 PM

September 28, 2019

Gonzo

Recreating Sneakers scene

(Not a FreeBSD topic)

A long time ago, in my teens, I watched the movie Sneakers and was very impressed by it. It was exactly how I imagined hackers at work: bare PCBs, signal probes, de-scrambling encrypted information right on the screen. I was so impressed by the latter part that I went and re-created a bit of the “No more secrets” scene using the only technologies I knew back then: Turbo Pascal running on MS-DOS. It wasn’t an exact replica, but it was close enough and I was quite happy with the result. The program itself didn’t survive my numerous moves from one apartment to another and got lost along with all the floppy discs sometime in the early aughts.

Recently I re-watched the movie, and it turned out that it holds up pretty well and is still fun to watch. I thought that it would be a fun experiment to re-create my small demo and check if I still remember any of the Pascal and DOS APIs. Equipped with a Pascal for PC book published in ’91, some MS-DOS guides from the internet, and a Guinness 4-pack, I took a short walk down memory lane and it was fun. The result is published on GitHub. The demo in action:

by gonzo at September 28, 2019 11:57 PM

Adrian Chadd

Fixing up KA9Q-unix, or "neck deep in 30 year old codebases.."

I'll preface this by saying - yes, I'm still neck deep in FreeBSD's wifi stack and 802.11ac support, but it turns out it's slow work to fix 15-year-old locking-related issues that worked fine on 11abg cards, kinda worked ok on 11n cards, and are terrible for these 11ac cards. I'll .. get there.

Anyhoo, I've finally been mucking around with AX.25 packet radio. I've been wanting to do this since I was a teenager and found out about its existence, but back in high school and .. well, until a few years ago really .. I didn't have my amateur radio licence. But, now I do, and I've done a bunch of other stuff with a bunch of other radios. The main stumbling block? All my devices are either Apple products or run FreeBSD - and none of them have useful AX.25 stacks. The main stacks of choice these days run on Linux, Windows or are a full hardware TNC.

So yes, I was avoiding hacking on AX.25 stuff because there wasn't a BSD compatible AX.25 stack. I'm 40 now, leave me be.

But! A few weeks ago I found that someone was still running a packet BBS out of San Francisco. And amazingly, his local node ran on FreeBSD! It turns out Jeremy (KE6JJJ) ported both an old copy of KA9Q and N0ARY-BBS to run on FreeBSD! Cool!

I grabbed my 2m radio (which is already cabled up for digital modes), compiled up his KA9Q port, figured out how to get it to speak to Direwolf, and .. ok. Well, it worked. Kinda.

Here's my config:

ax25 mycall CALLSIGN-1          # station callsign and SSID
ax25 version 2                  # speak AX.25 version 2
ax25 maxframe 7                 # window: up to 7 outstanding I-frames
# attach a KISS interface over TCP to Direwolf's KISS port (8001),
# with a 65535-byte buffer, a 256-byte MTU, at 1200 baud
attach asy 127.0.0.1:8001 kissui tnc0 65535 256 1200

.. and it worked. But it wasn't fast. I mean, sure, it's 1200 bps data, but after digging I found some very bad stack behaviour on both KA9Q and N0ARY. So, off I went to learn about AX.25.

And holy hell, there are some amusing bugs. I'll list the big showstoppers first and then what I think needs to happen next.

Let's look at the stack behaviour first. So, when doing LAPB over AX.25, there's a bunch of frames with sequence numbers that go out, and then the receiver ACKs the sequence numbers four ways:
  • RR - "receive ready" - yes, I ACK everything up to N-1
  • RNR - I ack everything up to N-1 but I'm full; please stop sending until I send something back to start transmission up again
  • REJ - I received an invalid or missing sequence number, ACK everything to N-1 and retransmit the rest please
  • I - this is a data frame which includes both the send and receive sequence numbers. Thus, transmitted data can implicitly ACK received data.
I'd see bursts like this:
  • N0ARY would send 7 frames
  • I'd receive them, and send individual RR's for each of them
  • N0ARY would then send another 7 frames
  • I'd receive a few, maybe I'd get a CRC error on one or miss something, and send a REJ, followed by getting what I wanted or not and maybe send an RR
  • N0ARY would then start sending many, many more copies of the same frame window, in a loop
  • I'd individually ACK/REJ each of these appropriately
  • .. and it'd stay like this until things eventually caught up.


So, two things were going wrong here.

Firstly - KA9Q didn't implement the T2 timer in the AX.25 v2.0 spec. T2 is an optional timer which a TNC can use to delay sending responses until it expires, allowing it to batch up responses instead of responding to (eg RR'ing) each individual frame. Now, since a KISS TNC only sends data and not signaling up to the applications, all the applications can do is queue frames in response to other frames, or fire off timers to do things. The KA9Q stack doesn't know that the air is busy receiving more data - only that it received a frame. So, T2 can be used to buffer status updates until it expires.

N0ARY-BBS implements T2 for RR/RNR responses, but not for REJ responses.

Then, neither KA9Q nor N0ARY-BBS delays sending LAPB frames upon status notifications. Thus, every RR, RNR and REJ that is received may trigger sending whatever is left in the transmit window. Importantly, receiving a REJ will clear the "unack" (unacknowledged) window and force retransmission of everything. If you get a couple of REJs in a row then it'll try to send multiple sets of the same window out, over and over. If you get an RR and a REJ and an RR, it may send more data, then the whole window, then more data. It's crazy.

Finally, there's T1. T1 is the retransmission timer. Now, transmitting a burst of 7 full-length frames at 1200 baud takes around 2.2 seconds a frame, so it's around 15.4 seconds for the full burst. If T1 is less than that, then because there's no feedback about when the frames went out - only that you sent them to the TNC - you'll start trying to retransmit things. Now, luckily one can poll the other end using an RR poll frame to ask the other end to respond with its current sequence number - that way each end can re-establish what each other's send/receive sequence numbers are. However, these can also be batched up - so whilst you're sending your frames, T1 fires, generating another batch of RR polls to poke the other side. This in itself isn't such a bad thing, but it does mean the receiver sees a big, long burst of frames followed by a handful of RR polls. Strictly speaking this isn't ideal - you're only supposed to send a single poll and then not poll again until you get a response or another timeout.

So what have I done?

I'm doing what JNOS2 (another KA9Q port) is doing - I am using T2 for data transmission, RR, RNR and REJ transmission. It's not a pretty solution, but it at least stops the completely pointless retransmission of a lot of wasted data. I've patched both N0ARY and KA9Q to do this, so once KE6JJJ gets a chance to update his BBS and the N0ARY BBS I am hoping that the bad behaviour stops and the BBS becomes useful again for multiple people.

Ok, so what needs to happen?

Firstly, we've learnt a lot about networking since the 80s and 90s. AX.25 is kinda part TCP, part IP, so you'd think it should be fine as a timer-based protocol. But alas no - it's slow, it's mostly half duplex, and overflowing transmit queues or resending data incorrectly has a real cost. It's better to look at it as a reliable wireless protocol like 802.11, and /not/ as TCP/IP. 802.11 has timers, 802.11 has sequence numbers, and 802.11 tries to provide as reliable a stream as it can. But it doesn't guarantee traffic; if traffic takes too long it'll just time it out and let the upper layer (like TCP) handle the actual guarantees. Now, you kind of want the AX.25 LAPB sessions to be as reliable as possible, but this gets to the second point.

You need to figure out how to be fair between sessions. The KA9Q stacks right now don't schedule packets based on any kind of fairness between LAPB or AX.25 queues. The LAPB code will try to transmit stuff based only on its local window, and I guess they treat retransmits as something that signals they need to back off. That's all fine and dandy in theory, but in practice at 1200 bps a 7-packet window at 256 bytes a packet is 7 * 2.2 seconds, or 15.4 seconds. So if after 15.4 seconds the remote side immediately ACKs and you then send another 7-packet burst, no one else is going to really get a chance to come in and talk on the BBS.

So, this needs a couple things.

Firstly, just because you can transmit a maximum window of 7 doesn't mean you should. If you see the air being busy, maybe you want to back off a bit to let others get in and talk. Yes, it does mean the channel is being used less efficiently in total for a single session, but now you're allowing other sessions to get airtime and actually interact. Bugs aside, I've managed to keep the N0ARY BBS tied up for minutes at a time squeezing out tens of kilobytes of article content to me. That's all fine and dandy, but I don't mind if it takes a little longer when there are other users trying to also do stuff.

Next, the scheduling for LAPB shouldn't just be timers kicking off packet generation into a queue, and then best-effort transmission at the KISS TNC. If the AX.25 stack was told about the data link status transitions - ie, between idle, receiving, transmitting and such - then when the air was free the TNC could actually schedule which LAPB session(s) to service next. I've watched T1 (retransmission) and T2 kick over multiple times while someone else was downloading data, and when the air is eventually free the AX.25 node sends multiple copies of both I data payload and S status frames (RR, RNR, REJ, probes, etc.). It's insane. The only reason it's doing this is because it doesn't know the TNC is busy receiving or transmitting, and thus that those timers don't need to run and traffic doesn't need to be generated. This is how the MAC and PHY layers in 802.11 interoperate. The MAC doesn't queue lots of packets to be sent out when the PHY is ready - the MAC holds the work, and when the PHY signals the air is free and the contention window timer has expired, the MAC grabs the air and sends its frame. It sends what it can in its given time window and then it stops.

This means that yes, the popularity of KISS TNCs is part of the reason AX.25 is inefficient these days. KISS TNCs make it easy to do AX.25 packet, but they make it super easy to do it inefficiently. The direwolf author wrote a paper on this, comparing these techniques against an AX.25 stack (using AX.25 2.2 features) that has knowledge of the direwolf physical/radio layer. If these hooks were made available over the KISS TNC interface - and honestly, they'd just be a two-byte status notification saying that the TNC is in the { idle, receiving, decoding, transmitting } state - then AX.25 stacks could make much, much smarter decisions about what to transmit and when.
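
To make that concrete, here's a sketch of what such a notification could look like on the wire, reusing standard KISS framing (FEND = 0xC0; the 0x07 command byte is just an unassigned value picked for illustration, not an actual direwolf or KISS extension):

C0 07 00 C0    ; TNC is idle
C0 07 01 C0    ; TNC is receiving
C0 07 02 C0    ; TNC is decoding a frame
C0 07 03 C0    ; TNC is transmitting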

Finally - wow, this whole thing needs per-packet compression already. AX.25 version 2.2 introduces a way of negotiating parameters for supported extensions with remote TNCs, so one of my medium-term KA9Q/N0ARY goals is to introduce enough version 2.2 support to negotiate SREJ (selective rejection/retransmission) and maybe the window size options, but primarily to add compression. I think SREJ + per-packet compression would be the biggest benefits over 1200 and 9600 bps links.

If you're interested, software repositories are located below. I encourage people to contribute to the KE6JJJ work; I'm just forked off of it (github username erikarn) and I'll be pushing improvements there.

Oh, and these compile on FreeBSD. KA9Q and direwolf both compile and run on MacOSX but N0ARY-BBS doesn't yet do so. Yes, this does mean you can now do packet radio on FreeBSD and MacOSX.



by Adrian (noreply@blogger.com) at September 28, 2019 05:11 AM

January 27, 2019

Alexander Leidinger

Strategic thinking, or what I think what we need to do to keep FreeBSD relevant

For as long as I have participated in the FreeBSD project, there have been, from time to time, voices which say FreeBSD is dead and Linux is the way to go. Most of the time those voices are trolls, or people who do not really know what FreeBSD has to offer. Sometimes those voices wear blinders: they only see their own little world (where Linux just works fine) and do not see the big picture (e.g. that competition stimulates business, …) or even dare to look at what FreeBSD has to offer.

Sometimes those voices raise a valid concern, and it is up to the FreeBSD project to filter out what would be beneficial. Recently there were some mails on the FreeBSD lists along the lines of “What about going in direction X?”. Some people just had the opinion that we should stay where we are. In my opinion this is similarly bad to blindly saying FreeBSD is dead and following the masses. It would mean stagnation. We should not hold people back from exploring new / different directions. Someone wants to write a kernel module in (a subset of) C++ or in Rust… well, go ahead, give it a try, we can put it into the Ports Collection and let people get experience with it.

This discussion on the mailing lists also triggered some “where do we see ourselves in the next few years” / strategic-thinking reflection. What I present here is my very own opinion about things we in the FreeBSD project should look at to stay relevant in the long term. To be able to put that into scope, I need to clarify what “relevant” means in this case.

FreeBSD is currently used by companies like Netflix, NetApp, Cisco, Juniper, and many others as a base for products or services. It is also used by end-users as a work-horse (e.g. for mail servers, web servers, …). Staying relevant means, in this context, providing something the user base is interested in using, and which makes it easier and faster for them to deliver whatever they want or need to deliver than with another kind of system. That is in terms of time to market of a solution (time to deliver a service like a web/mail/whatever server, or a product), and in terms of performance (which means not only speed, but also security and reliability and …) of the solution.

I have categorized the list of items I think are important into (new) code/features, docs, polishing, and project infrastructure. Links in the following usually point to documentation/HOWTOs/experiences for/with FreeBSD, and not to the canonical entry points of the projects or technologies. In a few cases the links point to an explanation in Wikipedia or to the website of the topic in question.

Code/features

The virtualization train (OpenStack, OpenNebula, oVirt, CloudStack, Kubernetes, Docker, Podman, …) is running at full speed. The market is so big/important that solution providers even do joint ventures crossing the borders between each other, e.g. VMware is opening up to integrate their solution with solutions from Amazon/Azure/Google. The underlying infrastructure is getting more and more unimportant, as long as the services which shall be run perform as needed. Ease of use and time to market are the key drivers (the last little piece of performance is mostly important for companies which go to the “edge” (both meanings intended, in a non-exclusive-or way) like Netflix with their FreeBSD-based CDN). FreeBSD is not really participating in this world. Yes, we had jails way before anyone else out there had something similar, and some still do not have anything like them. But if you are realistic, FreeBSD does not play a major role here. You can do nice things with jails and bhyve, but you have to do it “by hand” (ezjail, iocage and such are improvements on the ease-of-use side, but that is not enough, as this is still limited to a host-centric view). The world has moved on to administering a datacenter (to avoid the buzzwords “cloud” or “private cloud”) with a single click. In my opinion we would need to port several of the initially mentioned cloud/container management solutions to FreeBSD and have them able to handle their work via jails and/or bhyve. If FreeBSD is not able to serve as a building block in this big picture, we will fall off the edge in this particular IT area in the long run.

With all the ready-made containers available on the internet, we should improve our linuxulator. Our kernel support for this is limited to a 2.6.32-ish ABI version (it is less than 2.6.32, more like 2.6.16; we are missing epoll and inotify support, among others, but 2.6.32 is the lowest version the glibc in the CentOS 7 based linux_base port is able to run on… and glibc checks the version number). We need to catch up to a more recent version if we want to be able to run those ready-made Linux containers without issue (we can put a Linux system into a jail and start that). If someone would like to work on that, a good start would be to run the Linux Test Project tests via the linuxulator and start fixing bugs. The last time I did that was in 2007, and about 16% of the test cases failed back then. It would also be quite nice if we could integrate those linuxulator tests into the FreeBSD CI. With improvements in the linuxulator and the above-mentioned virtualization support, we should be able to run more of those Linux images … ehrm, sorry, docker/kubernetes/…-containers within the linuxulator.
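
For those curious where their own system stands, the advertised ABI version is visible (and tweakable) via sysctl; a sketch, with the override value only as an example:

# load the 64-bit linuxulator and check the Linux kernel version it advertises
kldload linux64
sysctl compat.linux.osrelease
# the value can be raised to appease glibc version checks, at your own risk:
sysctl compat.linux.osrelease=3.2.0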

Finish the work regarding Kerberos in base. Cy Schubert is/was working on this. I do not know the status, but the “fixing 800 ports” part in his mail from May 2018 suggests that some more helping hands would be beneficial. This would bring us to a better starting point for a more seamless integration (some ports need “the other” Kerberos).

We have one port (as far as I was able to determine… the exact number does not really matter) in terms of SDN – net/openvswitch – but the documentation of this … leaves room for improvement (kernel support / netmap support and functionality / a FreeBSD-specific HOWTO). As part of the virtualization of everything, we (yes: we – as part of the FreeBSD Handbook, see the docs category below for more on this) need to provide this info so that FreeBSD is able to participate in this area. We should also have a look at porting some more SDN software, e.g. OpenContrail (now Tungsten Fabric; there is an old contrail porting project), OpenStack Neutron, OpenDaylight, … so that users have a choice, and so that FreeBSD can be integrated into existing heterogeneous environments.

Sensors (temperature, voltage, fans, …), a topic with history. The short version: in the Google Summer of Code 2007, code was produced, committed, and then removed again due to a dispute. My personal understanding (very simplified) is “remove everything because some of the data handled by this framework shall not be handled by this framework” (instead of e.g. “remove sensor X, this data shall not be handled in this way”), and “remove everything as this does not handle sensors of type X, which are not used in servers but in enterprise-class, >99% non-IT-related sensing”. Nothing better has shown up since then. If I look at Windows, VMware, Solaris and Linux, I can query sensors on my mainboard/chassis/disks/whatever (yes, I am mixing some apples with oranges here), plot them in monitoring systems, and get alarms. In FreeBSD we fail on this topic (actually multiple topics), which I consider to be something basic and mandatory. I do not suggest that we commit the Google Summer of Code 2007 code. I suggest having a look at what makes sense to do here. Take the existing code and commit it, or improve on this code outside the tree and then commit it, or write something new. In the end it does not matter (for a user) which way it is handled, as long as we have something users can use in the end. It surely makes sense to have an OS-provided framework for registering sensors in a central place (it would surely be nice if you could get the temperature/fan values of your graphics card… ooops… sorry… AI/HPC accelerator together with similar data from the other hardware in your system).

To continue playing well (not only) in the high-availability area, we should also have a look at getting an implementation of MPTCP (Multipath TCP) into the tree. Apple (and others) has been using it since 2013 with good benefits (most probably not only for Siri users). There exists some code for FreeBSD, but it is far from usable, and it does not look like there has been progress since 2016. We say we have the power to serve, but with the cloudification of recent years all users expect that everything is always on and never fails, and being able to provide the server side of this client-server technology for those people who have such high demands is necessary to not fall behind (let us not rest on our laurels).

SecureBoot also needs some helping hands. At some point, operating systems which do not support it will no longer be considered by companies.

Another item we should have a look at is providing the means to write kernel code in different languages. Not in the base system, but at least in ports. If someone wants to write a kernel module in C++ or Rust, why not? It offers possibilities to explore new areas. There are even reports of experiences with different languages. It does not fit your needs? Well, ignore it and continue writing kernel code in C, but let other people who want to use a screwdriver instead of a hammer do what they want; they will either learn that they should have used a hammer, or can report on the benefits of the screwdriver.

Docs

I think we can improve our end-user docs to the next level. The base system is already well covered (we can surely find some features which we could document), but a user does not use FreeBSD to use FreeBSD. A user surely has a goal in mind which requires setting up some kind of service (mail server, web server, display server (desktop system), …). While one could argue that it is the 3rd-party project which needs to document how to run their software on FreeBSD, I think we need to do our share here too. There are a lot of HOWTOs for Linux, and then you have to find some tips and tricks to make something work (better) on FreeBSD. What I have in mind here is that we should document how to make FreeBSD participate in a Windows Active Directory environment, or in an LDAP environment (as a client), improve the Samba part with FreeBSD-specific parts (like how to make Samba use ZFS snapshots for Windows Shadow Copies), configuration management tools, and so on. I am not talking about providing in-depth docs for the 3rd-party software, but little HOWTOs with FreeBSD-specific parts / tips and tricks, and a reference to the 3rd-party docs. People come to us with real-world needs, and if we provide them with a head start on the most common items (e.g. also covering nginx or whatever, and not only apache httpd) and then guide them to further docs, this will improve the value of our Handbook even more for end-users (especially for newcomers, but also for experienced FreeBSD users who all of a sudden need to do something they have never done before…).

We should also review our docs. The Handbook lists e.g. procmail (just an example…). With procmail unmaintained for a long time now and having known vulnerabilities, we should replace the info there with info about maildrop (or any suitable replacement). Careful review may also find similar items which need some care.

One more item I have in mind in terms of docs for users is the restructuring of some parts. Now that the world is thinking more in terms of XaaS (“something as a service”), we should also have a “cloud” section (going beyond what we already have in terms of virtualization) in our Handbook. We can put there items like the existing description of virtualization topics, but we should also add new items like glusterfs, object storage, or the hopefully upcoming description of how to set up OpenStack/kubernetes/… on FreeBSD. This goes in the same direction as the first docs item: provide more documentation on how our users can achieve their goals.

In my opinion we are also lacking on the developer-documentation side. Yes, we have man pages which describe the official API (in most cases). Where I see room for improvement is the source code documentation. Something like doxygen (or whatever the tool of the day is – which one does not really matter, any kind of extractable-from-source documentation is better than none) is already used in several places in our source (search for it via: egrep -R '\\(brief|file)' /usr/src/) and we already have some infrastructure to extract and render (HTML / PDF) it. The more accessible / easy it is to start development in FreeBSD, the more attractive it will be (in addition to the existing benefits) for people / companies to dive in. The best examples of documented source code I have found so far in our tree are the isci and ocs_fc device drivers.

Polishing

Polishing something, in the context of staying relevant? Yes! It is the details which matter. If people have 2 options with roughly the same features (nothing missing that you need, same price), which one do they take: the one which has everything consistent and well integrated, or the one with some quirks you can circumvent with a little bit of work on your side?

We have some nice features, but we are not using them to the extent possible. One of the items which comes to my mind is DTrace. The area which I think needs polishing is to add more probes, and to have some kind of probe-naming convention for common topics, for example an I/O-related naming convention (maybe area-specific, like storage I/O and network I/O), with all drivers made to comply. We should also look into making it more accessible by providing easier interfaces (no matter if text-based (thanks to Devin Teske for dwatch, more of this magic please…), web-based, or whatever) to make it really easy (= start a command or click around and you get the result for a specific set of probes/conditions/…). Some examples are statemaps, flamegraphs, and most prominently the Oracle/Sun ZFS Storage Analytics, to give you an idea of what is possible with DTrace and how to make it accessible to people without knowledge of kernel internals and programming.
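
Even a canned one-liner shows the kind of low entry barrier I mean; the io provider used here already exists in FreeBSD's DTrace today:

# count disk I/O requests by process name, system-wide, until Ctrl-C
dtrace -n 'io:::start { @[execname] = count(); }'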

Some polishing in the Ports Collection would be to revisit the default options for ports with options. The target here should be to have consistent default settings (e.g. server software should not depend upon X11 by default (directly or indirectly); most people should not need to build the port with non-default options). One could argue that it is the responsibility of the maintainer of the port, and to some extent it is, but we do not have guidelines which help here. So a little team of people to review all ports (and modify them if necessary) and come up with guidelines and examples would be great.

Additionally, we should come up with meta-ports for specific use cases, e.g. webserver (different flavours… apache/nginx/…) or database (different flavours, with some useful tools like mytop or mysqltuner or similar), and then even reference them in the Handbook (this goes along with my suggestion above to document real-world use cases instead of “only” the OS itself).
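
Meta-ports are cheap to create; a minimal sketch of such a webserver meta-port (the port name, maintainer and dependency list are made up for illustration) could look like this:

# ports/www/webstack-meta/Makefile (hypothetical)
PORTNAME=	webstack-meta
PORTVERSION=	1.0
CATEGORIES=	www
MAINTAINER=	someone@example.org
COMMENT=	Meta-port pulling in a typical web server stack

RUN_DEPENDS=	nginx:www/nginx \
		mysqltuner:databases/mysqltuner

USES=		metaport

.include <bsd.port.mk>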

Recently -current has seen some low-level performance improvements (via ifuncs, …). We should continue this, and even extend it to revising default settings / values (non-auto-tuned and even auto-tuned ones). I think it would be beneficial in the long run if we targeted more current hardware (without losing the ability to run on older hardware), and for those values which cannot be auto-tuned, provided some way of down-tuning (e.g. in a failsafe-boot setting in the loader, or in documented settings for rc.conf or wherever those defaults can be changed).

Project infrastructure

We have a CI (continuous integration) system, but it is not very prominently placed. Just recently it gained some more attention from the developer side, and we even got the first status report about it (nice! visibility helps make it a part of the community effort). There is a FreeBSD wiki page about the status and future ideas, but it has not been updated in several months. There is also a page which talks in more detail about using it for performance testing, which is something people have talked about for years but which never became available (and is not available today).

I think we need to improve here. The goals I think are important are to get various testing, sanitizing and fuzzing technologies integrated into our CI. In the config repository I have not found any integration of e.g. the corresponding clang technologies (fuzzing, ASAN, UBSAN, MSAN (still experimental, so maybe not a target before the more mature technologies)) or any other such technology.

We should also make our CI more public/visible (build status linked somewhere on www.freebsd.org, nag people more about issues found by it, have some docs on how to add new tests (maybe from ports)), so that more people can help extend what we automatically test (e.g. how could I integrate the LTP (Linux Test Project) tests to test our linuxulator? This requires downloading a Linux dist port and the LTP itself, and then running the tests). There are a lot of nice ideas floating around, but I have the impression we are lacking some helping hands to get the various items integrated.

Wrap-Up

Various items I talked about above are not sexy. They are typically not the things people do just for fun; they are typically items people get paid for. It would be nice if some of the companies which benefit from FreeBSD would lend a helping hand for one item or another. Maybe the FreeBSD Foundation has some contacts they could ask about this?

It could also be that for some of the items I mentioned here there is more going on than I know of. In that case, the corresponding work could be made better known on the mailing lists. When it is better known, maybe someone will want to lend a helping hand.

by netchild at January 27, 2019 09:18 PM

October 22, 2018

Dag-Erling Smørgrav

DNS over TLS in FreeBSD 12

With the arrival of OpenSSL 1.1.1, an upgraded Unbound, and some changes to the setup and init scripts, FreeBSD 12.0, currently in beta, now supports DNS over TLS out of the box.

DNS over TLS is just what it sounds like: DNS over TCP, but wrapped in a TLS session. It encrypts your requests and the server’s replies, and optionally allows you to verify the identity of the server. The advantages are protection against eavesdropping and manipulation of your DNS traffic; the drawbacks are a slight performance degradation and potential firewall traversal issues, as it runs over a non-standard port (TCP port 853) which may be blocked on some networks. Let’s take a look at how to set it up.

Basic setup

As a simple test case, let’s set up our 12.0-ALPHA10 VM to use Cloudflare’s DNS service:

# uname -r
12.0-ALPHA10
# cat >/etc/rc.conf.d/local_unbound <<EOF
local_unbound_enable="YES"
local_unbound_tls="YES"
local_unbound_forwarders="1.1.1.1@853 1.0.0.1@853"
EOF
# service local_unbound start
Performing initial setup.
destination:
/var/unbound/forward.conf created
/var/unbound/lan-zones.conf created
/var/unbound/control.conf created
/var/unbound/unbound.conf created
/etc/resolvconf.conf not modified
Original /etc/resolv.conf saved as /var/backups/resolv.conf.20181021.192629
Starting local_unbound.
Waiting for nameserver to start... good
# host www.freebsd.org
www.freebsd.org is an alias for wfe0.nyi.freebsd.org.
wfe0.nyi.freebsd.org has address 96.47.72.84
wfe0.nyi.freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
wfe0.nyi.freebsd.org mail is handled by 0 .

Note that this is not a configuration you want to run in production—we will come back to this later.

Performance

The downside of DNS over TLS is the performance hit of the TCP and TLS session setup and teardown. We demonstrate this by flushing our cache and (rather crudely) measuring a cache miss and a cache hit:

# local-unbound-control reload
ok
# time host www.freebsd.org >x
host www.freebsd.org > x 0.00s user 0.00s system 0% cpu 0.553 total
# time host www.freebsd.org >x
host www.freebsd.org > x 0.00s user 0.00s system 0% cpu 0.005 total

Compare this to querying our router, a puny Soekris net5501 running Unbound 1.8.1 on FreeBSD 11.1-RELEASE:

# time host www.freebsd.org gw >x
host www.freebsd.org gw > x 0.00s user 0.00s system 0% cpu 0.232 total
# time host www.freebsd.org 192.168.144.1 >x
host www.freebsd.org gw > x 0.00s user 0.00s system 0% cpu 0.008 total

or to querying Cloudflare directly over UDP:

# time host www.freebsd.org 1.1.1.1 >x      
host www.freebsd.org 1.1.1.1 > x 0.00s user 0.00s system 0% cpu 0.272 total
# time host www.freebsd.org 1.1.1.1 >x
host www.freebsd.org 1.1.1.1 > x 0.00s user 0.00s system 0% cpu 0.013 total

(Cloudflare uses anycast routing, so it is not so unreasonable to see a cache miss during off-peak hours.)

This clearly shows the advantage of running a local caching resolver—it absorbs the cost of DNSSEC and TLS. And speaking of DNSSEC, we can separate that cost from that of TLS by reconfiguring our server without the latter:

# cat >/etc/rc.conf.d/local_unbound <<EOF
local_unbound_enable="YES"
local_unbound_tls="NO"
local_unbound_forwarders="1.1.1.1 1.0.0.1"
EOF
# service local_unbound setup
Performing initial setup.
destination:
Original /var/unbound/forward.conf saved as /var/backups/forward.conf.20181021.205328
/var/unbound/lan-zones.conf not modified
/var/unbound/control.conf not modified
Original /var/unbound/unbound.conf saved as /var/backups/unbound.conf.20181021.205328
/etc/resolvconf.conf not modified
/etc/resolv.conf not modified
# service local_unbound start
Starting local_unbound.
Waiting for nameserver to start... good
# time host www.freebsd.org >x
host www.freebsd.org > x 0.00s user 0.00s system 0% cpu 0.080 total
# time host www.freebsd.org >x
host www.freebsd.org > x 0.00s user 0.00s system 0% cpu 0.004 total

So does TLS add nearly half a second to every cache miss? Not quite, fortunately—in our previous tests, our first query was not only a cache miss but also the first query after a restart or a cache flush, resulting in a complete load and validation of the entire path from the name we queried to the root. The difference between a first and second cache miss is quite noticeable:

# time host www.freebsd.org >x 
host www.freebsd.org > x 0.00s user 0.00s system 0% cpu 0.546 total
# time host www.freebsd.org >x
host www.freebsd.org > x 0.00s user 0.00s system 0% cpu 0.004 total
# time host repo.freebsd.org >x
host repo.freebsd.org > x 0.00s user 0.00s system 0% cpu 0.168 total
# time host repo.freebsd.org >x
host repo.freebsd.org > x 0.00s user 0.00s system 0% cpu 0.004 total

Revisiting our configuration

Remember when I said that you shouldn’t run the sample configuration in production, and that I’d get back to it later? This is later.

The problem with our first configuration is that while it encrypts our DNS traffic, it does not verify the identity of the server. Our ISP could be routing all traffic to 1.1.1.1 to its own servers, logging it, and selling the information to the highest bidder. We need to tell Unbound to validate the server certificate, but there’s a catch: Unbound only knows the IP addresses of its forwarders, not their names. We have to provide it with names that will match the x509 certificates used by the servers we want to use. Let’s double-check the certificate:

# :| openssl s_client -connect 1.1.1.1:853 |& openssl x509 -noout -text |& grep DNS
DNS:*.cloudflare-dns.com, IP Address:1.1.1.1, IP Address:1.0.0.1, DNS:cloudflare-dns.com, IP Address:2606:4700:4700:0:0:0:0:1111, IP Address:2606:4700:4700:0:0:0:0:1001

This matches Cloudflare’s documentation, so let’s update our configuration:

# cat >/etc/rc.conf.d/local_unbound <<EOF
local_unbound_enable="YES"
local_unbound_tls="YES"
local_unbound_forwarders="1.1.1.1@853#cloudflare-dns.com 1.0.0.1@853#cloudflare-dns.com"
EOF
# service local_unbound setup
Performing initial setup.
destination:
Original /var/unbound/forward.conf saved as /var/backups/forward.conf.20181021.212519
/var/unbound/lan-zones.conf not modified
/var/unbound/control.conf not modified
/var/unbound/unbound.conf not modified
/etc/resolvconf.conf not modified
/etc/resolv.conf not modified
# service local_unbound restart
Stopping local_unbound.
Starting local_unbound.
Waiting for nameserver to start... good
# host www.freebsd.org
www.freebsd.org is an alias for wfe0.nyi.freebsd.org.
wfe0.nyi.freebsd.org has address 96.47.72.84
wfe0.nyi.freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
wfe0.nyi.freebsd.org mail is handled by 0 .

How can we confirm that Unbound actually validates the certificate? Well, we can run Unbound in debug mode (/usr/sbin/unbound -dd -vvv) and read the debugging output… or we can confirm that it fails when given a name that does not match the certificate:

# perl -p -i -e 's/cloudflare/cloudfire/g' /etc/rc.conf.d/local_unbound
# service local_unbound setup
Performing initial setup.
destination:
Original /var/unbound/forward.conf saved as /var/backups/forward.conf.20181021.215808
/var/unbound/lan-zones.conf not modified
/var/unbound/control.conf not modified
/var/unbound/unbound.conf not modified
/etc/resolvconf.conf not modified
/etc/resolv.conf not modified
# service local_unbound restart
Stopping local_unbound.
Waiting for PIDS: 33977.
Starting local_unbound.
Waiting for nameserver to start... good
# host www.freebsd.org
Host www.freebsd.org not found: 2(SERVFAIL)

But is this really a failure to validate the certificate? Actually, no. When provided with a server name, Unbound will pass it to the server during the TLS handshake, and the server will reject the handshake if that name does not match any of its certificates. To truly verify that Unbound validates the server certificate, we have to confirm that it fails when it cannot do so. For instance, we can remove the root certificate used to sign the DNS server’s certificate from the test system’s trust store. Note that we cannot simply remove the trust store entirely, as Unbound will refuse to start if the trust store is missing or empty.

While we’re talking about trust stores, I should point out that you currently must have ca_root_nss installed for DNS over TLS to work. However, 12.0-RELEASE will ship with a pre-installed copy.
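
For anyone following along on an earlier snapshot where it is missing, installing it is a one-liner:

# pkg install ca_root_nss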

Conclusion

We’ve seen how to set up Unbound—specifically, the local_unbound service in FreeBSD 12.0—to use DNS over TLS instead of plain UDP or TCP, using Cloudflare’s public DNS service as an example. We’ve looked at the performance impact, and at how to ensure (and verify) that Unbound validates the server certificate to prevent man-in-the-middle attacks.

The question that remains is whether it is all worth it. There is undeniably a performance hit, though this may improve with TLS 1.3. More importantly, there are currently very few DNS-over-TLS providers—only one, really, since Quad9 filter their responses—and you have to weigh the advantage of encrypting your DNS traffic against the disadvantage of sending it all to a single organization. I can’t answer that question for you, but I can tell you that the parameters are evolving quickly, and if your answer is negative today, it may not remain so for long. More providers will appear. Performance will improve with TLS 1.3 and QUIC. Within a year or two, running DNS over TLS may very well become the rule rather than the experimental exception.

by Dag-Erling Smørgrav at October 22, 2018 09:36 AM

August 13, 2018

Alexander Leidinger

Essen Hackathon 2018

Again it was that time of the year when we had the pleasure of the Essen Hackathon, in nice weather conditions (sunny, not too hot, no rain). A lot of people were here, about 20. Not only FreeBSD committers showed up, but also contributors (the biggest group was 3 people who work on iocage/libiocage, plus some individuals with interests in various topics, like e.g. SCTP / network protocols, and other topics I unfortunately forgot).

The topics of interest this year:

  • workflows / processes
  • Wiki
  • jail- / container management (pkgbase, iocage, docker)
  • ZFS
  • graphics
  • documentation
  • bug squashing
  • CA trust store for the base system

I was first working with Allan on moving forward with a CA trust store for the base system (target: make fetch work out of the box for TLS connections – currently you will get an error that the certificate cannot be validated if you do not have the ca_root_nss port (or any other source of trust) installed and a symlink in base to the PEM file). We investigated how base-openssl, ports-openssl and libressl are set up (ports-openssl is the odd one in the list: it looks in LOCALBASE/openssl for its default trust store, while we would have expected it to look in LOCALBASE/etc/ssl). As no ports-based SSL lib is looking into /etc/ssl, we were safe to do whatever we want in base without breaking the behavior of ports which depend upon the ports-based SSL libs. With that, the current design is to import a set of CAs into SVN – one cert file per CA – plus a way to update them (for the security officer and for users), to blacklist CAs, and to have base-system and local CAs merged into the base config. The expectation is that Allan will be able to present at least a prototype at EuroBSDCon.

I also had a look with the iocage/libiocage developers at some issues I have with iocage. The nice thing is, the current version of libiocage already solves the issue I see (I just have to change my processes a little bit). Some more cleanup is needed on their side until they are ready for a port of libiocage. I am looking forward to this.

Additionally, I got some time to look at the list of PRs with patches I wanted to look at. Out of the 17 PRs I took note of, I have closed 4 (one because it was overcome by events). One is in progress (committed to -current, but I want to MFC that). One additional one (from the iocage guys) I forwarded to jamie@ for review. I also noticed that Kristof fixed some bugs too.

On the social side we had discussions during BBQ, pizza/pasta/…, and a restaurant visit. As always, Kristof was telling some funny stories (or at least telling stories in a funny way… 😉 ). This of course triggered some other funny stories from other people. All in all, my bottom line for this year's Essen Hackathon is (as for the other 2 I visited): fun, sun and progress for FreeBSD.

By bringing cake every time I went there, it seems I have created a tradition. So everyone should already plan to register for the next one – if nothing bad happens, I will bring cake again.

by netchild at August 13, 2018 07:46 PM

April 09, 2018

Dag-Erling Smørgrav

Twenty years

Yesterday was the twentieth anniversary of my FreeBSD commit bit, and tomorrow will be the twentieth anniversary of my first commit. I figured I’d split the difference and write a few words about it today.

My level of engagement with the FreeBSD project has varied greatly over the twenty years I’ve been a committer. There have been times when I worked on it full-time, and times when I did not touch it for months. The last few years, health issues and life events have consumed my time and sapped my energy, and my contributions have come in bursts. Commit statistics do not tell the whole story, though: even when not working on FreeBSD directly, I have worked on side projects which, like OpenPAM, may one day find their way into FreeBSD.

My contributions have not been limited to code. I was the project’s first Bugmeister; I’ve served on the Security Team for a long time, and have been both Security Officer and Deputy Security Officer; I managed the last four Core Team elections and am doing so again this year.

In return, the project has taught me much about programming and software engineering. It taught me code hygiene and the importance of clarity over cleverness; it taught me the ins and outs of revision control; it taught me the importance of good documentation, and how to write it; and it taught me good release engineering practices.

Last but not least, it has provided me with the opportunity to work with some of the best people in the field. I have the privilege today to count several of them among my friends.

For better or worse, the FreeBSD project has shaped my career and my life. It set me on the path to information security in general and IAA in particular, and opened many a door for me. I would not be where I am now without it.

I won’t pretend to be able to tell the future. I don’t know how long I will remain active in the FreeBSD project and community. It could be another twenty years; or it could be ten, or five, or less. All I know is that FreeBSD and I still have things to teach each other, and I don’t intend to call it quits any time soon.

by Dag-Erling Smørgrav at April 09, 2018 08:35 PM

February 05, 2018

Remko Lodder

Response zones in BIND (RPZ / blocking unwanted traffic).

A while ago, my dear colleague Mattijs came to me with an interesting feature in BIND: response policy zones. One can create custom "zones" and enforce a policy on them.

I had never worked with it before, so I had no clue at all what to expect from it. Mattijs told me how to configure it (see below for an example) and offered to let me slave his RPZ policy domains.
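A minimal sketch of such a configuration (the zone name and master address are made up):

// named.conf: slave the policy zone from its master ...
zone "rpz.example.net" {
    type slave;
    masters { 192.0.2.1; };
    file "slave/rpz.example.net";
};

// ... and apply it to all responses.
options {
    response-policy { zone "rpz.example.net"; };
};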

All of a sudden I was no longer getting most ads, spam and other junk. It was filtered. Wow!

His RPZ zones were custom made and based on PiHole. But where PiHole adds hosts to the local "hosts" file and points them at 127.0.0.1 (your local machine), which merely prevents them from reaching the actual server, RPZ policies are much stronger and more dynamic.

RPZ policies also offer "redirecting" queries. What do I mean by that? Well, you can put an advertisement (AD for short) site or domain into the RPZ policy and return NXDOMAIN, so it simply no longer exists for the end-user. But you can also CNAME it to a domain/host you own, put a webserver on that host, and tell the user querying the page: "The site you are trying to reach has been proactively blocked by the DNS software. This is an automated action and an automated response. If you feel that this is not appropriate, please let us know at <mail link>", or something like that.

Once I noticed that and saw the value, I immediately saw the benefit for companies, and most likely for schools and home users. Mattijs had a busy time at work and I was recovering from health issues, so I had "plenty" of time to investigate and read up on this. The RPZ zones were not updated very often and caused some problems, for example for my ereaders (they use msftncsi.com; see another post on this website for me being grumpy about that). And I wanted to learn more about it. So what did I do?

Yes, I wrote my own parser, in Perl. I wrote an "rpz-generator" (it is actually called that). I added the sources Mattijs used and generated my own files. They are rather huge, since I block ads, malware, fraud, exploits, Windows stuff and various other things (gambling, fake news, and the like).

I also included some whitelists, because msftncsi.com was added to the lists and made my ereaders go berserk, and we play a few games here and there which use some advertising sites, so we wanted to exempt those as well. It is better to know which ones they are and selectively allow them than to have traffic going to every data collector out there.

This works rather well. I do not get many complaints that things are not working, and I do see a lot of queries going to "banned" sites every day, so it is doing something. The most obvious effect is that search results on Google are not always clickable: the [ADV] results are blocked because they advertise Google-sponsored sites, which are on the list, as are google-analytics and friends. It does not harm our internet experience much, with the exception of those ADV links. My wife sometimes wants to click on one because she searched for something that happens to be on the list, but apart from that we are doing just fine.

One thing though: I set all this up, and wrote this article, using "NXDOMAIN", which just gives back "site does not exist" answers. I want to make my script smarter by making the action selectable per category, so that some categories are CNAMEd to a filtering domain with a web page and others are NXDOMAINed. If you have experience with that, please show me some ideas, what it looks like, and whether your end-users can do anything useful with it. I think schools would be happy to present a block page instead of NXDOMAINing some sites 🙂
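For illustration, both actions in RPZ zone-file syntax (the domains are made up; CNAME . is what produces the NXDOMAIN answer):

$TTL 300
@                   IN SOA localhost. root.localhost. (1 3600 600 86400 300)
                    IN NS  localhost.
; Return NXDOMAIN for this domain and all its subdomains:
ads.example.com     IN CNAME .
*.ads.example.com   IN CNAME .
; Redirect this one to a host we own that serves a block page:
tracker.example.org IN CNAME blockpage.example.net.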

Acknowledgements: Mattijs for teaching and showing me RPZ, ISC for putting RPZ in named, and zytrax.com for having such excellent documentation on RPZ. Also the Perl developers for having such a great tool around, and the various sites I get the blocklists from. Thank you all!

If you want to know more about the tool, please contact me and we can share whatever information is available 🙂

by Remko Lodder at February 05, 2018 11:09 PM

November 25, 2017

Erwin Lansing

120. Red ale

5,5kg Maris Otter
500g Crystal 60L
450g Munich I
100g Chocolate malt

Mash for 75 minutes at 65°C

30g Cascade @ 60 min.
30g Centennial @ 60 min.
30g Cascade @ 10 min.
30g Centennial @ 10 min.

Bottled on January 7, 2018 with 150g table sugar

White Labs WLP001 California ale yeast
OG: 1.052
FG: 1.006
ABV: 6,0%

The post 120. Red ale appeared first on Droso.

by erwin at November 25, 2017 05:34 PM

September 10, 2017

Erwin Lansing

119. Dunkel Weizen

2,5kg Munich II
2,5kg Dark wheat malt
0,2kg Special B
0,10kg Carafa II

Mash for 60 min at 65°C

30g Hallertauer (4%) for 90 min.

Bottled on November 5, with 120g table sugar

White Labs WLP300 Hefeweizen
OG: 1.055
FG: 1.012
ABV: 5,6%

The post 119. Dunkel Weizen appeared first on Droso.

by erwin at September 10, 2017 01:28 PM

August 29, 2017

Remko Lodder

FreeBSD: Using Open-Xchange on FreeBSD

If you go looking for a usable webmail application, you might end up with Open-Xchange (OX for short). Some larger ISPs use OX as the webmail application for their customers. It has a multitude of options available: using multiple email accounts, CalDAV/CardDAV included (not external ones (yet?)), etc. There are commercial options available for these ISPs, but also for smaller resellers.

But there is also the community edition, which you can run for free on your own machine(s). It does not have some of the fancy modules that large setups need and require, and some updates might arrive a bit later than they do for paying customers, but it is very complete and usable.

I decided to set this up for my private clients who like to use a webmail client to access their email. At first I ran this in a VM using bhyve on FreeBSD. The VM ran CentOS 6 and had the necessary bits installed for the OX setup (see: https://oxpedia.org/wiki/index.php?title=AppSuite:Open-Xchange_Installation_Guide_for_CentOS_6). I modified the files I needed to change to get this going, and there, it just worked. But running in a VM, with limited CPU and memory assigned (there is always a cap) and everything emulated, I was not very happy with it: I needed to maintain an additional installation and update it, while I have this perfectly fine FreeBSD server instead. (Note that I am not against using bhyve at all, it works very well, but I wanted to reduce my maintenance base a bit :-)).

So a few days ago I considered just moving the stuff over to the FreeBSD host instead. And actually it was rather trivial to do with the working setup on CentOS.

At this moment I do not see an easy way to get the sources/components directly from within FreeBSD. I have asked OX for help with this, so that we can perhaps get it sorted out and maybe even make a port/package of it for FreeBSD.

The required host changes and software installation

The first thing I did was create a ZFS dataset for /opt. The software is normally installed there, and in this case I wanted a contained location which I can snapshot, delete, etc., without affecting much of the normal system. I copied over the /opt/open-xchange directory from my CentOS installation. Looking at the CentOS installation I noticed that it used a dedicated user 'open-xchange', which I created on my FreeBSD host, and I changed the files to be owned by this user. A process listing on the CentOS machine also revealed that it needs Java, so I installed the openjdk8 package (''pkg install openjdk8''). The setup did not yet start; there were errors about /bin/bash missing. That obviously required installing bash (''pkg install bash''), and then you can go two ways: alter every shebang (#!) to /usr/local/bin/bash (or better yet #!/usr/bin/env bash), or symlink /usr/local/bin/bash to /bin/bash, which is what I did (I have asked OX to make this more portable by using the env variant instead).
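Condensed into commands, that preparation looks roughly like this (the pool name zroot is an assumption):

# contained dataset for /opt, easy to snapshot or destroy
zfs create -o mountpoint=/opt zroot/opt

# dedicated user matching the CentOS installation
pw useradd open-xchange -d /opt/open-xchange -s /usr/sbin/nologin

# dependencies, plus the bash symlink the OX scripts expect
pkg install openjdk8 bash
ln -s /usr/local/bin/bash /bin/bash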

The /var/log/open-xchange directory does not normally exist, so I created it and made sure ''open-xchange'' can write to it (mkdir /var/log/open-xchange && chown open-xchange /var/log/open-xchange).

With that I was able to start the /opt/open-xchange/sbin/open-xchange process, but I could not yet easily reach it: in the CentOS installation there are two files in the Apache configuration that needed some attention on my FreeBSD host. The Apache include files ox.conf and proxy_http.conf give away hints about what to change. In my case I needed to add the redirect to the vhost that runs OX (RedirectMatch ^/$ /appsuite/) and make sure the /var/www/html/appsuite directory is copied over from the CentOS installation as well. (You can stick it in any location, as long as the web user can reach it, you Alias it to the proper directory, and you set up directory access.)

Apache configuration (Reverse proxy mode)

The proxy_http.conf file is more interesting: it contains the reverse proxy settings needed to connect to the Java instance of OX and serve your clients. I needed to add a few modules to Apache to make this work. I already had several proxy modules enabled for other reasons, so the list below can probably be trimmed to the exact modules needed, but since this works for me, I might as well just show you:

LoadModule slotmem_shm_module libexec/apache24/mod_slotmem_shm.so
LoadModule deflate_module libexec/apache24/mod_deflate.so
LoadModule expires_module libexec/apache24/mod_expires.so
LoadModule proxy_module libexec/apache24/mod_proxy.so
LoadModule proxy_connect_module libexec/apache24/mod_proxy_connect.so
LoadModule proxy_http_module libexec/apache24/mod_proxy_http.so
LoadModule proxy_scgi_module libexec/apache24/mod_proxy_scgi.so
LoadModule proxy_wstunnel_module libexec/apache24/mod_proxy_wstunnel.so
LoadModule proxy_ajp_module libexec/apache24/mod_proxy_ajp.so
LoadModule proxy_balancer_module libexec/apache24/mod_proxy_balancer.so
LoadModule lbmethod_byrequests_module libexec/apache24/mod_lbmethod_byrequests.so
LoadModule lbmethod_bytraffic_module libexec/apache24/mod_lbmethod_bytraffic.so
LoadModule lbmethod_bybusyness_module libexec/apache24/mod_lbmethod_bybusyness.so
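The proxy rules themselves live in the same file; a trimmed sketch of what the relevant part can look like (port 8009 follows the OX defaults, but treat it and the balancer name as assumptions and verify against your own installation):

<Proxy balancer://oxcluster>
    Require all granted
    # the OX backend started by /opt/open-xchange/sbin/open-xchange
    BalancerMember http://localhost:8009 timeout=100 smax=0 ttl=60 retry=60
    ProxySet stickysession=JSESSIONID|jsessionid scolonpathdelim=On
</Proxy>

ProxyPass /appsuite/api balancer://oxcluster/appsuite/api
ProxyPass /ajax balancer://oxcluster/ajax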

After that it was running fine for me. My users can log in to the application, and the local directories are used instead of the VM which ran it first. If you look at previous documentation on this subject, you will see that more third-party packages were needed back then. It could easily be that more modules are needed than the ones I wrote about; my setup was not clean, as the host already runs several websites (one of them being this one) and of course supporting packages were already installed.

Updating is currently NOT possible. The CentOS installation requires running ''yum update'' periodically, but that is obviously not possible on FreeBSD, and the packages used within CentOS are not directly usable on FreeBSD. I have asked OX to provide the various community base and optional modules as raw .tar.gz files, so that we can fetch them and install them in the proper location(s). As long as the .js/.jar files etc. are all there and the scripts are modified to start, it will just work. I have not yet created a startup script for this. For the moment I will just start the VM, see whether there are updates, and copy them over instead. Since I did not need to make additional changes on the main host, this is a very easy and straightforward process.

Support

There is no support for OX on FreeBSD. Of course I would like to see at least some support, to promote my favorite OS more, but that is a financial matter. It might not cost much to deliver the .tar.gz files so that we can package them and spread the use of OX to more installations (and thus perhaps add revenue for OX as a commercial offering), but supporting more than that will cost FTEs. If you see a commercial opportunity, please let them know, so that this becomes more and more realistic.

The documentation above is just how I set up the installation, and I wanted to share it with you. I do not offer support for it, but of course I am willing to answer questions you might have about the setup. I did not include the vhost configuration in its entirety; if that is a popular request, I will add it to this post.

Open Questions to OX:

So as mentioned I have questioned OX for some choices:

  • Please use a more portable path for the Bash shell (#!/usr/bin/env bash)
  • Please allow the use of a different localbase (/usr/local/open-xchange for example)
  • Please allow FreeBSD packagers to fetch a "clean" .tar.gz, so that we can package this for OX and distribute it for our end-users.
  • Unrelated to the post above: Please allow the usage of external caldav/carddav providers

Edit:

I have found another thing I needed to change: I had to use gsed (GNU sed) instead of FreeBSD's sed so that the listuser scripts work. Linux sed behaves a bit differently, but if you replace sed with gsed, those scripts work fine.

I have not yet received feedback from OX.

by Remko Lodder at August 29, 2017 07:48 AM

April 11, 2017

Eric Anholt

This week in vc4 (2017-04-10): dmabuf fencing, meson

The big project for the last two weeks has been developing dmabuf fencing support for vc4.  Without dmabuf fences, when passing buffers between devices the user needs to manually wait for the job to finish on one (say, camera snapshot) before letting the other device get started (accumulating GL commands to texture from the camera snapshot).  That means leaving both devices idle for a moment while the CPU accumulates the command stream for the consumer, but the bigger pain is that it requires that the end user manage the synchronization.

With dma-buf fencing in the kernel, a "reservation object" generated by the dma-buf exporter tracks the fences of the various devices using the shared object, and then the device drivers get to look at that list and wait on each other's fences when using it.

So far, I've got my reservations and fences being exported from vc4, so that pl111 display can wait for vc4 to be done before actually putting a new pageflip up on the screen.  I haven't quite hooked up the other direction, for camera capture into vc4 display or GL texturing (I don't have a testcase for this, as the current camera driver doesn't expose dmabufs), but it shouldn't be hard.

On the meson front, rendercheck is now converted to meson upstream.  I've made more progress on the X Server:  Xorg is now building, and even successfully executes Xorg -pogo with the previous modesetting driver in place.  The new modesetting driver is failing mysteriously.  With a build hack I got from the meson folks and some work from ajax, the sdksyms script I complained about in my last post isn't used at all on the meson build.  And, best of all, the meson devs have written the code needed for us to not even need the build hack I'm using.

It's so nice to be using a build system that's an actual living software project.

by anholt at April 11, 2017 12:48 AM

March 27, 2017

Eric Anholt

This week in vc4 (2017-03-27): Upstream PRs, more CMA, meson

Last week I sent pull requests for bcm2835 changes for 4.12.  We've got some DT updates for HDMI audio, DSI, and SDHOST, and defconfig changes to enable SDHOST.  The DT changes to actually enable SDHOST (and get wifi working on Pi3) won't land until 4.13.

I also wrote a patch to enable using more than 256MB of CMA memory (and not require any particular alignment).  The 256MB limit was due to a hardware bug: the binner's memory allocations get dereferenced with their top 4 bits set to the top 4 bits of the tile state data array's address.  Given that tile state allocations happen after CL setup (while the binner is running and throwing overflow interrupts), there was no way to guarantee that we could find overflow memory with the top bits matching.

The new solution, suggested by someone from the set top box group, is to allocate a single 16MB to 32MB buffer at HW init time, and return all of those types of allocations out of it, since it turns out you don't need much to complete rendering of any given scene.  I've been mulling over the details of a solution for a while, and finally wrote and tested the patch I wanted (tricky parts included freeing the memory when the hardware was idle, and how to track the lifetimes of the sub-allocations).  Results look good, and I'll be submitting it this week.

However, I spent most of the week on converting the X Server over to meson.

Meson is a delightful new build system (based around Ninja on Linux) that massively speeds up builds, while also being portable to Windows (unlike autotools generally).  If you've ever tried to build the X stack on Raspberry Pi, you know that autotools is painfully slow.  It's also been the limiting factor for me in debugging my scripts for CI for the X Server -- something we'd really like to be doing as we hack on glamor or do refactors in the core.

So far all I've landed in this project is code deletion, as I find build options that aren't hooked up to anything, or code that isn't hooked up to build options.  This itself will speed up our builds, and ajax has been working in parallel on deleting a bunch of code that makes the build messier than it needs to be.  I've also submitted patches for rendercheck converting to meson (as a demo of what the conversion looks like), and I have Xephyr, Xvfb, Xdmx, and Xwayland building in the X Server with meson.

So far the only stumbling block for the meson conversion of the X Server is the X.org sdksyms.c file.  It's the ugliest part of the build -- running the C preprocessor on a generated .c that #includes a bunch of .h files, then running the output of that through awk and trying to parse C using regular expressions.  This is, as you might guess, somewhat fragile.

My hope for a solution to this is to just quit generating sdksyms.c entirely.  Using ELF sections, we can convince the linker to not garbage collect symbols that it thinks are unused.  Then we get to just decorate symbols with XORG_EXPORT or XORG_EXPORT_VAR (unfortunately have to have separate sections for RO vs RW contents), and Xorg will have the correct set of symbols exported.  I started on a branch for this, ajax got it actually building, and now we just need to bash the prototypes so that the same set of symbols are exported before/after the conversion.

by anholt at March 27, 2017 10:43 PM

October 25, 2016

Murray Stokely

FreeBSD on Intel NUCs

I've been away from FreeBSD for a few years but I wanted some more functionality on my home network that I was able to configure with my Synology NAS and router. Specifically, I wanted:

  • a configurable caching name server that would serve up authoritative private names on my LAN and also validates responses with DNSSEC.
  • a more configurable DHCP server so I could make the server assign specific IPs to specific MAC addresses (a configuration sketch follows this list).
  • more compute power for transcoding videos for Plex.
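The second point, in ISC dhcpd terms, amounts to a few lines like these (all addresses here are made up):

subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.199;
    option routers 192.168.1.1;
}

# pin a fixed IP to a known MAC address
host mediabox {
    hardware ethernet 00:11:22:33:44:55;
    fixed-address 192.168.1.10;
}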

Running FreeBSD 11 on an Intel NUC seemed like an ideal solution to keep my closet tidy. As of this week, $406.63 on Amazon buys a last generation i3 Intel NUC mini PC (NUC5I3RYH), with 8GB of RAM and 128GB of SSD storage. This was the first model I tried since I found reports of others using this with FreeBSD online, but I was also able to get it working on the newer generation i5 based NUC6i5SYK with 16GB of RAM and 256GB of SSD. The major issue with these NUCs is that the Intel wireless driver is not supported in FreeBSD. I am not doing anything graphical with these boxes so I don't know how well the graphics work, but they are great little network compute nodes.

Installation

I downloaded the FreeBSD 11 memory stick images, and was pleased to see that the device booted fine off the memory stick without any BIOS configuration required. However, my installation failed trying to mount root ("Mounting from ufs:/dev/ufs/FreeBSD_Install failed with error 19."). Installation from an external USB DVD drive and over the network with PXE both proved more successful at getting me into bsdinstaller to complete the installation.

I partitioned the 128GB SSD device with 8GB of swap and the rest for the root partition (UFS, Journaled and Soft Updates). After installation I edited /etc/fstab to add a tmpfs(5) mount for /tmp. The dmesg output for this host is available in a Gist on Github.
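For reference, the tmpfs mount is a single fstab line (the size cap here is an assumption, not from the original setup):

# /etc/fstab: mount /tmp from memory
tmpfs  /tmp  tmpfs  rw,mode=1777,size=4g  0  0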

Warren Block's article on SSD on FreeBSD and the various chapters of the FreeBSD Handbook were helpful. There were a couple of tools that were also useful in probing the performance of the SSD with my FreeBSD workload:

  • The smartctl tool in the sysutils/smartmontools package allows one to read detailed diagnostic information from the SSD, including wear patterns.
  • The basic benchmark built into diskinfo -t reports that the SSD is transferring 503-510MB/second.
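These are the sorts of invocations meant (the device name ada0 is an assumption):

# detailed SMART health and wear data from the SSD
smartctl -a /dev/ada0

# quick-and-dirty transfer rate benchmark
diskinfo -t /dev/ada0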
But how well does it perform in practice?

Rough Benchmarks

This post isn't meant to report a comprehensive suite of FreeBSD benchmarks, but I did run some basic tests to understand how suitable these low power NUCs perform in practice. To start with, I downloaded the 11-stable source from Subversion and measured the build times to understand performance of the new system. All builds were done with a minimal 2 line make.conf:


MALLOC_PRODUCTION=yes
CPUTYPE?=core2

Build Speed

Build Command         Environment                          Real Time
make -j4 buildkernel  /usr/src and /usr/obj on SSD         10.06 minutes
make -j4 buildkernel  /usr/src on SSD, /usr/obj on tmpfs   9.65 minutes
make -j4 buildworld   /usr/src and /usr/obj on SSD         1.27 hours
make buildworld       /usr/src and /usr/obj on SSD         3.76 hours

Bonnie

In addition to the build times, I also wanted to look more directly at the performance of reading from flash and reading from the NFS-mounted home directories on my 4-drive NAS. I first tried Bonnie++, but ran into a 13-year-old bug in the FreeBSD NFS client. After switching to Bonnie, I was able to gather some reasonable numbers. I had to use really large file sizes for the random write test to eliminate most of the caching that was artificially inflating the results. For those that haven't seen it, Brendan Gregg's excellent blog post highlights some of the issues with file system benchmarks like Bonnie.


Average of 3 Bonnie runs with 40GB file size

Configuration |  Random I/O        |  Block Input       |  Block Output
              |  Seeks/sec  CPU %  |  Reads/sec  CPU %  |  Writes/sec  CPU %
NFS           |  99.2       0.9    |  106505     4.8    |  89966       7.5
SSD           |  8809       13.5   |  538671     25.33  |  160917      11.3

The block input rates from my Bonnie benchmarks on the SSD were within 5% of the value provided by the much quicker and dirtier diskinfo -t test.

Running Bonnie with less than 40GB file size yielded unreliable benchmarks due to caching at the VM layer. The following boxplot shows the random seek performance during 3 runs each at 24, 32, and 40GB file sizes. Performance starts to even off at this level but with smaller file sizes the reported random seek performance is much higher.

Open Issues

As mentioned earlier, I liked the performance I got running FreeBSD on the 2015-era i3 NUC5I3RYH so much that I bought a newer, more powerful second device for my network. The 2016-era i5 NUC6i5SYK is also running great. There are just a few minor issues I've encountered so far:

  • There is no FreeBSD driver for the Intel Wireless chip included with this NUC. Code for other platforms exists but has not been ported to FreeBSD.
  • The memory stick booting issue described in the installation section. It is not clear if it didn't like my USB stick for some reason, or the port I was plugging into, or if additional boot parameters would have solved the issue. Documentation and/or code needs to be updated to make this clearer.
  • Similarly, the PXE install instructions were a bit scattered. The PXE section of the Handbook isn't specifically targeting new manual installations into bsdinstall. There are a few extra things you can run into that aren't documented well or could be streamlined.
  • Graphics / X11 are outside of the scope of my needs. The NUCs have VESA mounts so you can easily tuck them behind an LCD monitor, but it is not clear to me how well they perform in that role.

by Murray (noreply@blogger.com) at October 25, 2016 03:27 AM

April 07, 2016

FreeBSD Foundation

Introducing a New Website and Logo for the Foundation


The FreeBSD Foundation is pleased to announce the debut of our new logo and website, signaling the ongoing evolution of the Foundation identity, and ability to better serve the FreeBSD Project. Our new logo was designed to not only reflect the established and professional nature of our organization, but also to represent the link between the Project and the Foundation, and our commitment to community, collaboration, and the advancement of FreeBSD.

We did not make this decision lightly. We are proud of the Beastie in the Business Suit and the history he encompasses. That is why you’ll still see him make an appearance on occasion. However, as the Foundation’s reach and objectives continue to expand, we must ensure our identity reflects who we are today, and where we are going in the future. From spotlighting companies who support and use FreeBSD, to making it easier to learn how to get involved in, spread the word about, and work within the Project, the new site has been designed to better showcase not only how we support the Project, but also the impact FreeBSD has on the world. The launch today marks the end of Phase I of our Website Development Project. Please stay tuned as we continue to add enhancements to the site.

We are also in the process of updating all our collateral, marketing literature, stationery, etc with the new logo. If you have used the FreeBSD Foundation logo in any of your marketing materials, please assist us in updating them. New Logo Guidelines will be available soon. In the meantime, if you are in the process of producing some new literature, and you would like to use the new Foundation logo, please contact our marketing department to get the new artwork.

Please note: we've moved the blog to the new site. See it here.

by Anne Dickison (noreply@blogger.com) at April 07, 2016 04:40 PM

February 26, 2016

FreeBSD Foundation

FreeBSD and ZFS

ZFS has been making headlines lately, so it seems like the right time to talk about the longstanding relationship between FreeBSD and ZFS.

For nearly seven years, FreeBSD has included a production quality ZFS implementation, making it one of the key features of the FreeBSD operating system. ZFS is a combined file system and volume manager. Decoupling physical media from logical volumes allows free space to be efficiently shared between all of the file systems. ZFS introduced unprecedented data integrity and reliability guarantees to storage on FreeBSD. ZFS supports varying levels of redundancy for tolerance of hardware failures and includes cryptographic checksums on all data to guard against corruption.

Allan Jude, VP of Operations at ScaleEngine and coauthor of FreeBSD Mastery: ZFS, said “We started using ZFS in 2011 because we needed to safely store a huge quantity of video for our customers. FreeBSD was, and still is, the best platform for deploying ZFS in production. We now store more than a petabyte of video using ZFS, and use ZFS Boot Environments on all of our servers.”

So why does FreeBSD include ZFS and contribute to its continued development? FreeBSD community members understand the need for continued development work as technologies evolve. OpenZFS is the truly open source successor to the ZFS project and the FreeBSD Project has participated in OpenZFS since its founding in 2013. FreeBSD developers and those from Delphix, Nexenta, Joyent, the ZFS on Linux project, and the Illumos project work together to continue improving OpenZFS.

FreeBSD’s unique open source infrastructure, copyfree license, and engaged community support the integration of a variety of free software components, including OpenZFS. FreeBSD makes an excellent operating system for servers and end users, and it provides a foundation for many open source projects and commercial products.

We're happy that ZFS is available in FreeBSD as a fully integrated, first class file system and wish to thank all of those who have contributed to it over the years.

by Anne Dickison (noreply@blogger.com) at February 26, 2016 03:23 PM

February 20, 2016

Joseph Koshy

ELF Toolchain v0.7.1

I am pleased to announce the availability of version 0.7.1 of the software being developed by the ElfToolChain project.

This release offers:
  • Better support of the DWARF4 format.
  • Support for more machine architectures.
  • Many bug fixes and improvements.
The release also contains experimental code for:
  • A library handling the Portable Executable (PE) format.
  • A link editor.
The release may be downloaded from SourceForge:
https://sourceforge.net/projects/elftoolchain/files/Sources/elftoolchain-0.7.1/
Detailed release notes are available at the URL mentioned above.

Many thanks to the project's supporters for their contributions to the project.

by Joseph Koshy (noreply@blogger.com) at February 20, 2016 12:06 PM

January 25, 2015

Giorgios Keramidas

Some Useful RCIRC Snippets

I have been using rcirc as my main IRC client for a while now, and I really like the simplicity of its configuration. All of my important IRC options now fit in a couple of screens of text.

All the rcirc configuration options are wrapped in an eval-after-load form, to make sure that rcirc settings are there when I need them, but they do not normally cause delays during the startup of all Emacs instances I may spawn:

(eval-after-load "rcirc"
  '(progn

     rcirc-setup-forms

     (message "rcirc has been configured.")))

The “rcirc-setup-forms” are then divided into three clearly delimited sections:

  • Generic rcirc configuration
  • A hook for setting up nice defaults in rcirc buffers
  • Custom rcirc commands/aliases

Only the first set of options is really required. Rcirc can still function as an IRC client without the rest of them. The rest is there mostly for convenience, and to avoid typing the same setup commands more than once.

The generic options I have set locally are just a handful of settings to set my name and nickname, to enable logging, to let rcirc authenticate to NickServ, and to tweak a few UI details. All this fits nicely in 21 lines of elisp:

;; Identification for IRC server connections
(setq rcirc-default-user-name "keramida"
      rcirc-default-nick      "keramida"
      rcirc-default-full-name "Giorgos Keramidas")

;; Enable automatic authentication with rcirc-authinfo keys.
(setq rcirc-auto-authenticate-flag t)

;; Enable logging support by default.
(setq rcirc-log-flag      t
      rcirc-log-directory (expand-file-name "irclogs" (getenv "HOME")))

;; Passwords for auto-identifying to nickserv and bitlbee.
(setq rcirc-authinfo '(("freenode"  nickserv "keramida"   "********")
                       ("grnet"     nickserv "keramida"   "********")))

;; Some UI options which I like better than the defaults.
(rcirc-track-minor-mode 1)
(setq rcirc-prompt      "»» "
      rcirc-time-format "%H:%M "
      rcirc-fill-flag   nil)

The next section of my rcirc setup is a small hook function which tweaks rcirc settings separately for each buffer (both channel buffers and private-message buffers):

(defun keramida/rcirc-mode-setup ()
  "Sets things up for channel and query buffers spawned by rcirc."
  ;; rcirc-omit-mode always *toggles*, so we first 'disable' it
  ;; and then let the function toggle it *and* set things up.
  (setq rcirc-omit-mode nil)
  (rcirc-omit-mode)
  (set (make-local-variable 'scroll-conservatively) 8192))

(add-hook 'rcirc-mode-hook 'keramida/rcirc-mode-setup)

Finally, the largest section of them all contains definitions for some custom commands and short-hand aliases for stuff I use all the time. First come a few handy aliases for talking to ChanServ, NickServ and MemoServ. Instead of typing /quote nickserv help foo, it’s nice to be able to just type /ns help foo. This is exactly what the following three tiny forms enable, by letting rcirc know that “/cs”, “/ms” and “/ns” are valid commands and passing-along any arguments to the appropriate IRC command:

;;
;; Handy aliases for talking to ChanServ, MemoServ and NickServ.
;;

(defun-rcirc-command cs (arg)
  "Send a private message to the ChanServ service."
  (rcirc-send-string process (concat "CHANSERV " arg)))

(defun-rcirc-command ms (arg)
  "Send a private message to the MemoServ service."
  (rcirc-send-string process (concat "MEMOSERV " arg)))

(defun-rcirc-command ns (arg)
  "Send a private message to the NickServ service."
  (rcirc-send-string process (concat "NICKSERV " arg)))

Next comes a nifty little /join replacement which can join multiple channels at once, as long as their names are separated by spaces, commas or semicolons. To make its code more readable, it’s split into 3 little functions: rcirc-trim-string removes leading and trailing whitespace from a string, rcirc-normalize-channel-name prepends “#” to a string if it doesn’t have one already, and finally rcirc-cmd-j uses the first two functions to do the interesting bits:

(defun rcirc-trim-string (string)
  "Trim leading and trailing whitespace from a string."
  (replace-regexp-in-string "^[[:space:]]*\\|[[:space:]]*$" "" string))

(defun rcirc-normalize-channel-name (name)
  "Normalize an IRC channel name. Trim surrounding
whitespace, and if it doesn't start with a ?# character, prepend
one ourselves."
  (let ((trimmed (rcirc-trim-string name)))
    (if (= ?# (aref trimmed 0))
        trimmed
      (concat "#" trimmed))))

;; /j CHANNEL[{ ,;}CHANNEL{ ,;}CHANNEL] - join multiple channels at once
(defun-rcirc-command j (arg)
  "Short-hand for joining a channel by typing /J channel,channel2,channel,...

Spaces, commas and semicolons are treated as channel name
separators, so that all the following are equivalent commands at
the rcirc prompt:

    /j demo;foo;test
    /j demo,foo,test
    /j demo foo test"
  (let* ((channels (mapcar 'rcirc-normalize-channel-name
                           (split-string (rcirc-trim-string arg) " ,;"))))
    (rcirc-join-channels process channels)))

The last short-hand command lets me type /wii NICK to get “extended” whois information for a nickname, which usually includes idle times too:

;; /WII nickname -> /WHOIS nickname nickname
(defun-rcirc-command wii (arg)
  "Show extended WHOIS information for one or more nicknames."
  (dolist (nickname (split-string arg " ,"))
    (rcirc-send-string process (concat "WHOIS " nickname " " nickname))))

With that, my rcirc setup is complete (at least in the sense that I can use it to chat with my IRC friends). There are no fancy bells and whistles like DCC file transfers, or fancy color parsing, and similar things, but I don’t need all that. I just need a simple, fast, pretty IRC client, and that’s exactly what I have now.

by keramida at January 25, 2015 08:41 AM

January 07, 2015

Murray Stokely

AsiaBSDCon 2014 Videos Posted (6 years of BSDConferences on YouTube)

Sato-san has once again created a playlist of videos from AsiaBSDCon. There are 20 videos from the conference held March 15-16, 2014, and the papers can be found here. Congrats to the organizers for running another successful conference in Tokyo. A full list of videos is included below. Six years ago, when I first created this channel, videos longer than 10 minutes couldn't normally be uploaded to YouTube and we had to create a special partner channel for the content. It is great to see how the availability of technical video content about FreeBSD has grown in the last six years.

by Murray (noreply@blogger.com) at January 07, 2015 11:22 PM

December 26, 2013

Giorgios Keramidas

Profiling is Our Friend

I recently wrote a tiny demo program to demonstrate to co-workers how one can build a cache with age-based expiration of its entries, using purely immutable Scala collections. The core of the cache was something like 25-30 lines of Scala code like this (imports of java.util.concurrent.atomic.AtomicReference and scala.collection.immutable.HashMap are omitted):

class FooCache(maxAgeMillis: Long) {
  def now: Long = System.currentTimeMillis

  case class CacheEntry(number: Long, value: Long,
                        birthTime: Long) {
    def age: Long = now - birthTime
  }

  lazy val cache: AtomicReference[HashMap[Long, CacheEntry]] =
    new AtomicReference(HashMap[Long, CacheEntry]())

  def values: HashMap[Long, CacheEntry] =
    cache.get.filter{ case (key, entry) =>
      entry.age <= maxAgeMillis }

  def get(number: Long): Long = {
    values.find{ case (key, entry) =>
      key == number && entry.age <= maxAgeMillis
    } match {
      case Some((key, entry)) =>
        entry.value                // cache hit
      case _ =>
        val entry = CacheEntry(number, compute(number), now)
        cache.set(values + (number -> entry))
        entry.value
    }
  }

  def compute(number: Long): Long =
    { /* Some long-running computation based on 'number' */ }
}

The main idea here is that we keep an atomically updated reference to an immutable HashMap. Every time we look for entries in the HashMap we check if (entry.age <= maxAgeMillis), to skip over entries which are already too old to be of any use. Then on cache insertion time we go through the ‘values’ function which excludes all cache entries which have already expired.

Note how the cache itself is not ‘immutable’. We are just using an immutable HashMap collection to store it. This means that Scala can do all sorts of optimizations when multiple threads want to iterate through all the entries of the cache looking for something they want. But there’s an interesting performance bug in this code too…

It’s relatively easy to spot once you know what you are looking for, but did you already catch it? I didn’t. At least not the first time I wrote this code. But I did notice something was ‘odd’ when I started doing lookups from multiple threads and looked at the performance stats of the program in a profiler. YourKit showed the following for this version of the caching code:

JVM Profile #1

See how CPU usage hovers around 60% and we are doing a hefty bunch of garbage collections every second? The profiler quickly led me to line 17 of the code pasted above, where I am going through ‘values’ when looking up cache entries.

Almost 94% of the CPU time of the program was spent inside the .values() function. The profiling report included this part:

+-----------------------------------------------------------+--------|------+
|                           Name                            | Time   | Time |
|                                                           | (ms)   | (%)  |
+-----------------------------------------------------------+--------|------+
| demo.caching                                              | 62.084 | 99 % |
| +-- d.caching.Numbers.check(long)                         | 62.084 | 99 % |
|   +-- d.caching.FooCacheModule$FooCache.check(long)       | 62.084 | 99 % |
|     +---d.caching.FooCacheModule$FooCache.values()        | 58.740 | 94 % |
|     +---scala.collection.AbstractIterable.find(Function1) |  3.215 |  5 % |
+-----------------------------------------------------------+--------|------+

We are spending far too much time expiring cache entries. It is easy to see why with a second look at the get() function: every cache lookup first expires old entries and then searches for a matching cache entry.

The way cache-entry expiration works with an immutable HashMap as the underlying store is that values() iterates over the entire cache HashMap and builds a new HashMap containing only the cache entries which have not expired. This is bound to take a lot of processing power, and it is also what causes the creation of all those ‘new’ objects we are garbage collecting every second!

Do we really need to construct a new cache HashMap every time we do a cache lookup? Of course not… We can just filter the entries while we are traversing the cache.
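Concretely, the fix is a one-line change to the lookup in get():

  def get(number: Long): Long = {
    cache.get.find{ case (key, entry) =>      // was: values.find{ ... }
      key == number && entry.age <= maxAgeMillis
    } match {
      // ... the rest of the function is unchanged ...
    }
  }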

Changing line 17 from values.find{} to cache.get.find{} means we no longer expire cache entries on every single lookup, so our cache lookup speed is no longer limited by how fast we can construct new CacheEntry objects, link them into a HashMap and garbage-collect the old ones. Running the new code through YourKit once more showed an immensely better utilization profile for the 8 cores of my laptop’s CPU:

JVM Profile #2

Now we are not spending a bunch of time constructing throw-away objects, and garbage collector activity has dropped by a huge fraction. We can also make much more effective use of the available CPU cores for doing actual cache lookups, instead of busy work!

This was instantly reflected at the metrics I was collecting for the actual demo code. Before the change, the code was doing almost 6000 cache lookups per second:

-- Timers -------------------------------
caching.primes.hashmap.cache-check
             count = 4528121
         mean rate = 5872.91 calls/second
     1-minute rate = 5839.87 calls/second
     5-minute rate = 6053.27 calls/second
    15-minute rate = 6648.47 calls/second
               min = 0.29 milliseconds
               max = 10.25 milliseconds
              mean = 1.34 milliseconds
            stddev = 1.45 milliseconds
            median = 0.62 milliseconds
              75% <= 0.99 milliseconds
              95% <= 4.00 milliseconds
              98% <= 4.59 milliseconds
              99% <= 6.02 milliseconds
            99.9% <= 10.25 milliseconds

After the change to skip cache expiration at cache lookup, and only do cache entry expiration when we are inserting new cache entries, the same timer reported a hugely improved speed for cache lookups:

-- Timers -------------------------------
caching.primes.hashmap.cache-check
             count = 27500000
         mean rate = 261865.50 calls/second
     1-minute rate = 237073.52 calls/second
     5-minute rate = 186223.68 calls/second
    15-minute rate = 166706.39 calls/second
               min = 0.00 milliseconds
               max = 0.32 milliseconds
              mean = 0.02 milliseconds
            stddev = 0.02 milliseconds
            median = 0.02 milliseconds
              75% <= 0.03 milliseconds
              95% <= 0.05 milliseconds
              98% <= 0.05 milliseconds
              99% <= 0.05 milliseconds
            99.9% <= 0.32 milliseconds

That’s more like it. A cache lookup which completes within 0.32 milliseconds for the 99.9th percentile of all cache lookups is something I definitely prefer working with. The insight from profiling tools like YourKit was instrumental in both understanding what the actual problem was, and verifying that the solution actually had the effect I expected it to have.

That’s why profiling is our friend!

by keramida at December 26, 2013 04:38 AM

September 25, 2012

Joseph Koshy

New release: ELF Toolchain v0.6.1

I am pleased to announce the availability of version 0.6.1 of the software being developed by the ElfToolChain project.

This new release supports additional operating systems (DragonFly BSD, Minix and OpenBSD), in addition to many bug fixes and documentation improvements.

This release also marks the start of a new "stable" branch, for the convenience of downstream projects interested in using our code.

Comments welcome.

by Joseph Koshy (noreply@blogger.com) at September 25, 2012 02:25 PM

April 02, 2010

Henrik Brix Andersen

Downloading Sony GPS Assist Data Manually

After buying a new Sony DSC-HX5V digital camera, which is equipped with an integrated GPS, I discovered that it comes with Windows-only software for downloading and updating the GPS almanac on the camera (the supplied PMB Portable software runs on Apple OS X, but it does not support downloading the GPS almanac).

After tinkering a bit with tcpdump(1) and friends I found out how to perform the download and update manually:

  1. Download assistme.dat
  2. Download assistme.md5
  3. Verify that the MD5 sum of the assistme.dat file matches the one in the assistme.md5 file
  4. Create a top-level folder hierarchy on the memory card for the camera (not the internal memory of the camera) called PRIVATE/SONY/GPS/
  5. Place the downloaded assistme.dat file in the PRIVATE/SONY/GPS/ folder
  6. Place the memory card in the camera and verify that the GPS Assist Data is valid

I have written a small perl script for automating the above tasks. The script takes the mount point of the memory card as argument.
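The script itself is not reproduced here, but a shell sketch of the same steps looks like this (the download URL is a placeholder for the links above, and the single-hash layout of the md5 file is an assumption):

#!/bin/sh
BASE="https://example.sony.net/gps"   # placeholder, use the links above
CARD="$1"                             # mount point of the memory card

fetch "${BASE}/assistme.dat" "${BASE}/assistme.md5"

# compare the computed MD5 sum with the published one
[ "$(md5 -q assistme.dat)" = "$(cat assistme.md5)" ] || exit 1

mkdir -p "${CARD}/PRIVATE/SONY/GPS"
cp assistme.dat "${CARD}/PRIVATE/SONY/GPS/"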

by brix at April 02, 2010 06:31 PM

March 21, 2010

Henrik Brix Andersen

Monitoring Soekris Temperature through SNMP

Here’s a quick tip for monitoring the temperature of your Soekris net4801 through SNMP on FreeBSD:

Install the net-mgmt/bsnmp-ucd and sysutils/env4801 ports and add the following to /etc/snmpd.conf:

begemotSnmpdModulePath."ucd" = "/usr/local/lib/snmp_ucd.so"
%ucd
extNames.0 = "temperature"
extCommand.0 = "/usr/local/sbin/env4801 | /usr/bin/grep ^Temp | /usr/bin/cut -d ' ' -f 6"

Enable and start bsnmpd(1). The temperature of your Soekris net4801 can now be queried via the UCD-SNMP-MIB::extOutput.0 OID (1.3.6.1.4.1.2021.8.1.101.0).
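To turn it on and test it (the community string "public" and the hostname are assumptions):

echo 'bsnmpd_enable="YES"' >> /etc/rc.conf
/etc/rc.d/bsnmpd start

# query the temperature from another host (net-snmp's snmpget)
snmpget -v 2c -c public soekris.example.net 1.3.6.1.4.1.2021.8.1.101.0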

by brix at March 21, 2010 11:39 AM