Friday 22 March 2013

IPv6 once again

Well, after quite some time of absence, I'm trying to get back on track. New country, new job, new life (no, not really), still no native IPv6 (really!). Sick of waiting for the promised native v6 connectivity from my provider, I've chosen to get an IPv6 tunnel once again. Nothing fancy this time, as I don't yet have a dedicated machine (a.k.a. my Dreamplug) to use as DNS server and router, but it's coming.

I've chosen the only available option for a tunnel, which is through SixXS. While it's a pretty good service, the registration process is a bit annoying. It's also a bit annoying that you only get a /64 after requesting a tunnel, and have to accrue more credit in order to have enough to ask for a shorter prefix (which I think will be a /56, but I'm not sure, as I'm not there yet).

Anyhow, this time the instructions are not for FreeBSD, but for Gentoo Linux, which I've been running on my laptop for ages. Currently the laptop is the only PC in the house, so it's acting as server, virtual machine host and pretty much everything else too.

After having received the tunnel from SixXS it was easy as pie to set it up, following the instructions. For the lazy, here's a short recap:
  • make sure you have CONFIG_TUN set to m or y in your .config file in /usr/src/linux (where the kernel sources should reside). I have it as a module (m). Check by using: 
    • cat /usr/src/linux/.config | grep CONFIG_TUN
  • emerge aiccu
  • configure aiccu by editing the config file. Just insert the username and password you use to log in to the SixXS webpage at the top, and leave the rest; it will work (see the sketch right after this list).
    • nano /etc/aiccu.conf
  • Start the daemon:
    • /etc/init.d/aiccu start
  • Add the daemon to the default runlevel
    • rc-update add aiccu default
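For reference, a minimal sketch of what /etc/aiccu.conf ends up containing (username and password are placeholders; the remaining options in the shipped config file can stay at their defaults):

# /etc/aiccu.conf - only these two lines really need changing
username XXXX-SIXXS
password mysecretpassword
# protocol tic, server tic.sixxs.net, ipv6_interface sixxs, automatic true,
# etc. can all stay as shipped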
Done. Now, given that I've got a /64 subnet, I'm allowed to distribute further addresses out of that prefix; I just can't subnet it, which for the moment is not important anyway.

So time to set up the router advertisement server - radvd. Again, quite easy to set up:
  • emerge radvd
  • Copy the config example into /etc/radvd.conf
    • bzcat /usr/share/doc/radvd-1.9.2-r1/radvd.conf.example.bz2 > /etc/radvd.conf
  • Edit the config file, and just insert your /64 prefix in the section starting with the line below (a sketch of the result follows this list) 
    • # example of a standard prefix 
  • No need to edit the options within the braces
  • Comment out everything below that section, taking care not to comment out the closing brace at the bottom
  • Save, exit, and start the daemon
    • /etc/init.d/radvd start
  • Add it to the default runlevel
    • rc-update add radvd default
  • The daemon will automatically switch on the packet-forwarding sysctls; now all machines on the network can go v6
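For reference, the edited section of /etc/radvd.conf ends up looking roughly like this (interface name and prefix are placeholders; the options inside the braces are the ones already present in the shipped example):

interface eth0
{
        AdvSendAdvert on;
        # example of a standard prefix
        prefix 2001:db8:1234:5678::/64
        {
                AdvOnLink on;
                AdvAutonomous on;
                AdvRouterAddr off;
        };
};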
Firewalling:
I have no IPv6 firewall. There's a simple reason for that: I control all machines on the network, which are my laptop and one or two virtual machines that either have their own firewall (Windows), run no services (FreeBSD), or run very few services, for which I'd need to open the firewall ports anyway. So what's the point of having a firewall?

Additionally, work has lately shown me that it's not the unknown ports you should be worried about, but the well-known ones, like port 80 or 443 if you have a webserver running, or port 21 for FTP or 22 for SSH. The attacks these days are aimed at those ports, trying to compromise the services behind them, rather than looking for unknown services on non-well-known ports.
You'd need a firewall that does DPI or an application firewall (like the one that comes with Windows) to get actual security. The rest is just useless.

And these guys agree on that too :-)

Monday 28 November 2011

IPv6 and DNS

So, time again for an update, this time regarding IPv6 and DNS.
For some time now I've been running IPv6 in the lab using a hexago/go6/gogo6/gateway6 tunnel, or whatever it's called these days.
While providing v6 connectivity is the easy part, resolving names and providing an IPv6 capable DNS server to the hosts is a different story.

So first of all, v6 enabled hosts should be able to connect via v6 to a DNS server to resolve names. If they're dual-stacked they can keep going with v4, but if they're v6 only a v6 transport capable DNS is needed.
So I had to run my own DNS server. Nothing special there: BIND is included in FreeBSD, so I set it up to forward requests, using AARNet's and Hurricane Electric's DNS servers as forwarders.
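For reference, the forwarding part of named.conf looks roughly like this (the addresses are placeholders, not the actual AARNet/Hurricane Electric ones):

options {
        listen-on-v6 { any; };
        forwarders { 2001:db8::53; 2001:db8:1::53; };
        forward only;
};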

But how does the DNS server announce itself to the clients? How will they get an entry in /etc/resolv.conf?

Well, there are two possibilities: RFC 6106 (better known as the RDNSS and DNSSL options for RAs) or Avahi.

Starting with FreeBSD 9.0, support for RFC 6106 is included, which makes it very easy to use. On the server running the tunnel endpoint, and thus rtadvd, simply add the following to /etc/rtadvd.conf (obviously adjust it to your needs):

em0:\
:rdnss="2001:db8::1":\
:dnssl="example.org":
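After editing the file, restart rtadvd so it picks up the new options (assuming it is already enabled in rc.conf):

/etc/rc.d/rtadvd restart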

Clients supporting RFC6106 will pick up the advertisement and add it to their resolv.conf.

Unfortunately not all clients support RFC6106, so Avahi supplies an alternative:
On the server, avahi-daemon.conf can be configured to advertise a DNS server by adding the following line:

publish-dns-servers=2001:db8::1
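For context, that line goes into the [publish] section of avahi-daemon.conf; a minimal sketch (the address is of course a placeholder):

[publish]
publish-dns-servers=2001:db8::1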

On the clients running Avahi, it's then necessary to run avahi-dnsconfd in order to get the server added to the resolv.conf.

So far so good: Linux (running rdnssd or NetworkManager), FreeBSD 9.0 and modern Mac OS X (Lion) boxes will happily accept RDNSS advertisements; older versions of those OSes can run Avahi and get a DNS server. Windows is left out unless you want to deploy a DHCPv6 server as well, which I don't at the moment.

Now, a nice side effect of using Avahi is that it comes with nss-mdns, which allows resolving names on the local network via mDNS.
This comes in pretty handy, as IPv6 addresses are a bit hard to remember.

Now the perfect next step would be to have Avahi (or any other mDNS client) update the DNS server with the host names, so that hosts could be accessed directly by name from everywhere. Unfortunately there is still some work to be done, but it could be fixed soon by interns at CAIA.

Anyhow, for the moment it means that AAAA records have to be entered by hand into the DNS.

But there comes the next problem:
If I claim the zone "caia.swin.edu.au" in my DNS server but enter only AAAA records, I won't be able to resolve any A record anymore, as my DNS would reply with a negative answer to such queries. There would be no fallback from the client to the additional v4 DNS servers in its resolv.conf, as that only happens in the case of a timeout.

So it would be ideal to claim "caia.swin.edu.au" only for AAAA entries in my DNS, and forward any other queries to my forwarder.

Well, it seems BIND can't do that, so I switched to Unbound. I actually like it, and not only because we've already collaborated with Nominet and are in touch with people from NLnet Labs :-).

The reason Unbound is great is that it has the zone type typetransparent, which does exactly what I need: if the query for the RR type can be resolved locally, do it locally, otherwise forward it. As simple as that.

The relevant unbound.conf entries (they go into the server: clause) are similar to these:
local-zone: "caia.swin.edu.au." typetransparent
local-data: "example.caia.swin.edu.au IN AAAA 2001:db8::1"

These lines will resolve AAAA queries for example.caia.swin.edu.au locally, but forward A queries for example.caia.swin.edu.au to a forwarder.
You can add as many local-data entries as needed.
The forwarder needs to be configured as well:
forward-zone:
    name: "."
    forward-addr: 2001:db8::2

That's all you need.
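A quick way to check the split behaviour (assuming the local Unbound listens on ::1, and using host(1) to query it directly):

host -t AAAA example.caia.swin.edu.au ::1
host -t A example.caia.swin.edu.au ::1

The first query should be answered from the local-data entry, the second should come back via the forwarder.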

Unbound also has Python bindings, which might make it quite easy to interface Avahi with Unbound, as Avahi has Python bindings as well. Something worth exploring in the future.

Wednesday 7 September 2011

ZFS: adding disks and changing from non-redundant pools to mirrored

Lately I switched all (non-virtual) FreeBSD machines to use ZFS for their root file system, and it was a good choice!

Example 1:

On this example machine, I initially only had a single, relatively small HDD in there, running the root-fs on ZFS. The output of zpool status:
#zpool status

  pool: firstpool
 state: ONLINE
 scan: none requested
config:

        NAME                  STATE     READ WRITE CKSUM
        firstpool             ONLINE       0     0     0
          gpt/bootdisk-zfs    ONLINE       0     0     0

errors: No known data errors


After a while, I had the chance to upgrade the system, so I added a second 500GB HDD.
As I wanted the data on that disk separated from the data on the first disk, I created a second zpool:
#zpool create -m /mnt/data secondpool /dev/ada1

The -m option defines the mountpoint and mounts the pool there. It was ready for use immediately. Too easy, just great.
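From there, separate datasets can be carved out of the new pool as needed; a quick sketch with made-up dataset names:

zfs create secondpool/photos
zfs create secondpool/backups
zfs list -r secondpool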

After a while again, I got an additional 500GB HDD (yes it still fitted into my machine), and I decided to set up a ZFS mirror for my data dir, for safety against HDD failures.
Again just one simple command, and the mirror was working:
#zpool attach secondpool /dev/ada1 /dev/ada2

It took a while to resilver the new disk, as I had already put quite some data on it, but it was the easiest mirrored RAID setup I have ever done.
#zpool status

  pool: secondpool
 state: ONLINE
 scan: resilvered 105G in 0h29m with 0 errors on Fri Aug 5 11:13:40 2011
config:

        NAME          STATE     READ WRITE CKSUM
        secondpool    ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            ada1      ONLINE       0     0     0
            ada2      ONLINE       0     0     0


Example 2:

On another machine I migrated from UFS to ZFS for the root-fs, but the HDD with data on it still remained UFS formatted, as I had no space to move the data around (the data HDD is 1TB and 80% full). As soon as I got the opportunity of borrowing a large HDD (2TB), I moved from UFS to ZFS.

The easiest way would have been to create a new zpool with the borrowed disk, copy the contents from the UFS disk over to the ZFS disk, then attach the UFS disk to the new ZFS pool, let it resilver, and finally remove the borrowed disk from the ZFS pool, leaving only the original disk, now ZFS formatted in the machine.

Well, unfortunately this only works if the disk you attach is at least as big as the one already in the pool (of course), so I had to do it the long way round:

  1. Create a new zpool with the borrowed disk
  2. Move the content from the UFS disk to the ZFS disk
  3. Create a new zpool with the old (and now empty) disk
  4. Move the contents of the borrowed disk to the old disk (now both ZFS)
  5. Remove the borrowed device from its pool using zpool remove
  6. Destroy the pool using zpool destroy
Obviously you need to do some planning first, in order to get the desired zpool name and mountpoint right for the second zpool, which is the one you want to keep, rather than the first. Of course I didn't, but luckily it's so easy to just create and destroy zpools and add or remove devices in ZFS.

Next thing to do: setting up ZFS rolling snapshots. Recommended tool:

  • sysutils/zfsnap
The following /etc/periodic.conf entries should do the trick (not tested yet):
 
hourly_zfsnap_enable="YES" 
hourly_zfsnap_recursive_fs="sting2-data sting2/usr sting2/var"
hourly_zfsnap_ttl="1m"
hourly_zfsnap_verbose="NO" 
# Don't snap on pools resilvering (-s) or scrubbing (-S)
hourly_zfsnap_flags="-s -S"                       

reboot_zfsnap_enable="YES"
reboot_zfsnap_recursive_fs="sting2-data sting2/usr sting2/var"
reboot_zfsnap_ttl="1m"
reboot_zfsnap_verbose="NO"
# Don't snap on pools resilvering (-s) or scrubbing (-S)
reboot_zfsnap_flags="-s -S"

I'm a bit confused about the zfsnap_delete directives, as I would have expected the TTL to take care of deleting snapshots once they expire.

We'll see how it goes.

Tuesday 6 September 2011

FreeBSD ARM on Qemu in a VirtualBox System

After lots of mucking around, and not being happy with the ports cross-compile solution, I've finally been able to get FreeBSD/ARM running in Qemu inside a VirtualBox guest.

Here are the steps that led to success:

I've installed FreeBSD 8.2 as a VirtualBox guest, as described in my previous post. I've then installed the following ports:
  • devel/subversion-freebsd
  • emulators/qemu-devel
I've then checked out the latest FreeBSD-CURRENT source from svn.freebsd.org and put it into /usr/devel/:

cd /usr/devel
svn co http://svn.freebsd.org/base/head/ .

The next step was to patch the sources in order to make them work with Qemu and the supported Gumstix-connex architecture.

The patch can be downloaded here.

It has been created following the indications found here, here, here and here.

After patching the source:
patch -p0 < gumstix-qemu.patch



the GUMSTIX FreeBSD Kernel and the ARM world can be built and installed into /usr/armqemu/armworld:
make TARGET=arm TARGET_ARCH=arm KERNCONF=GUMSTIX DESTDIR=/usr/armqemu/armworld buildworld kernel installworld distrib-dirs distribution

I've then followed the suggestions in this message again in order to build a flash image for Qemu to pass to the -pflash option. The necessary GUMSTIX-connex u-boot image can be found here.
After obtaining it run:
dd of=flash bs=1k count=16k if=/dev/zero
dd of=flash bs=1k conv=notrunc if=u-boot-connex-400-r1604.bin
dd of=flash bs=1k conv=notrunc seek=256 if=/usr/armqemu/armworld/boot/kernel/kernel.gz.tramp

Now you are ready to boot a kernel, but there will be no root file system available. The solution is to have a diskless machine, using root via NFS.

This gets a bit tricky. In VirtualBox you do already have a DHCP server, but it won't assign any addresses to interfaces not created by VirtualBox itself. So you need to install an additional DHCP server, and run it. It is safer to use a NATed interface in VirtualBox for this case, unless you want the DHCP server to offer addresses outside of VirtualBox.
I used a single VBox interface, which shows up as em0 in the FreeBSD guest and gets address 10.0.2.15 assigned by default (the VBox default).

Next step is to install the net/isc-dhcp42-server port, and to configure it by editing /usr/local/etc/dhcpd.conf. I used the following subnet settings (based on this post):

subnet 10.0.2.0 netmask 255.255.255.0 {
range 10.0.2.20 10.0.2.30;
option routers 10.0.2.2;
next-server 10.0.2.15;
option root-path "10.0.2.15:/usr/armqemu/armworld";
}

The next step is to set the NFS export up in /etc/exports:
/usr/armqemu/armworld -maproot=root -network 10.0.2/24

Now we need to make sure that once the kernel hands control to init on the NFS root, the system is still able to read from the NFS file system, so we need to set the armworld's rc.conf straight.
Edit /usr/armqemu/armworld/etc/rc.conf and add:
hostname="qemu-arm"
ifconfig_smc0="DHCP"
sshd_enable="YES"
rpcbind_enable="YES"
nfs_client_enable="YES"
rpc_lockd_enable="YES"
rpc_statd_enable="YES"

Then we need to make sure that the Qemu guest is capable of reaching the host system via the network interface. This is described in this wiki entry. In order to make it work you need to do the following:
kldload aio
kldload if_tap
kldload if_bridge
ifconfig tap0 create
ifconfig bridge0 create
ifconfig bridge0 addm tap0 em0 up
ifconfig tap0 up

In my case, tap0 would never stay up for long, unless it's used. I've read somewhere that there is a sysctl to fix that, but can't find the link anymore.
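If I had to guess (an assumption from memory, not the lost reference), the sysctl in question is net.link.tap.up_on_open, which makes tap interfaces come up as soon as they are opened:

sysctl net.link.tap.up_on_open=1

The same line can go into /etc/sysctl.conf to make it permanent.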
In order to make the tap bridge available on each reboot, edit /boot/loader.conf and add:
aio_load="YES"
if_tap_load="YES"
if_bridge_load="YES"

You also need to edit /etc/rc.conf on the server to get the tap and bridge set up properly and also to get the NFS server running.
The necessary additions to /etc/rc.conf are the following:
cloned_interfaces="tap0 bridge0"
ifconfig_bridge0="addm tap0 addm em0 up"
ifconfig_tap0="up"
rpcbind_enable="YES"
mountd_enable="YES"
mountd_flags="-r"
nfs_server_enable="YES"
nfs_server_flags="-t -u -n4"
rpc_lockd_enable="YES"
rpc_statd_enable="YES"
dhcpd_enable="YES"
dhcpd_ifaces="em0"

Now the DHCP and NFS servers can be started with:
/usr/local/etc/rc.d/isc-dhcpd start
/etc/rc.d/rpcbind start
/etc/rc.d/mountd start
/etc/rc.d/nfsserver start
/etc/rc.d/lockd start
/etc/rc.d/statd start

In my case /etc/rc.d/nfsserver start wouldn't do anything, so I had to start nfsd like that:
nfsd -t -u -n4
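As an optional sanity check (assuming mountd started cleanly), the export list can be verified with:

showmount -e localhost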

After setting up everything it should be possible to start Qemu, boot the kernel and run FreeBSD from the NFS mounted root.
The start command (from here):
qemu-system-arm -M connex -m 289 -pflash flash -nographic -serial tcp::4000,server -net nic,macaddr=23:32:43:34:54:45,vlan=0,model=smc91c111 -net tap,vlan=0,ifname=tap0,script=no

Then in a different terminal connect to the machine via telnet:
telnet localhost 4000

Qemu will then boot the GUMSTIX u-boot first and try to load the kernel from a wrong address. At the prompt enter:
GUM> bootelf 40000

You should see the kernel boot, acquire a DHCP address (10.0.2.20) with the root-path option and then boot init from the NFS root. If everything worked, you'll end up at a login screen.

Thursday 25 August 2011

Cross compiling ports for ARM under FreeBSD

So this is something I've been working on, on and off, for the last few months, but now I think I've found probably the most elegant solution, although it's not working for every single port so far.
I managed to compile important ports like net/mpd5 and www/thttpd as well as some others, but failed for now with net/gateway6 or net-p2p/transmission-daemon.
But some tweaking might fix that.

But in general the following steps will get you to your cross-compiled ports:

1) Get the cross-compiler and tools:

This step is quite easy to achieve. Check out the source tree into /usr/devel (or whichever folder you prefer) and run

make TARGET=arm KERNCONF="" kernel-toolchain toolchain

This will place the cross-compiler, all the necessary tools and the libraries into /usr/obj/arm.arm/usr/devel/

2) Create the cross-compile environment

I started an extra shell (bash), in order not to mess up my current environment, and exported the following env variables:

export CC=/usr/obj/arm.arm/usr/devel/tmp/usr/bin/gcc
export CPP=/usr/obj/arm.arm/usr/devel/tmp/usr/bin/cpp
export CXX=/usr/obj/arm.arm/usr/devel/tmp/usr/bin/g++
export AS=/usr/obj/arm.arm/usr/devel/tmp/usr/bin/as
export NM=/usr/obj/arm.arm/usr/devel/tmp/usr/bin/nm
export RANLIB=/usr/obj/arm.arm/usr/devel/tmp/usr/bin/gnu-ranlib
export LD=/usr/obj/arm.arm/usr/devel/tmp/usr/bin/ld
export OBJCOPY=/usr/obj/arm.arm/usr/devel/tmp/usr/bin/objcopy
export SIZE=/usr/obj/arm.arm/usr/devel/tmp/usr/bin/size
export AR=/usr/obj/arm.arm/usr/devel/tmp/usr/bin/gnu-ar
export STRIPBIN=/usr/obj/arm.arm/usr/devel/tmp/usr/bin/strip
export MACHINE=arm
export MACHINE_ARCH=arm
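A quick way to confirm that the environment really produces ARM binaries (just a sanity check, not part of the original procedure):

echo 'int main(void){ return 0; }' > hello.c
$CC -o hello hello.c
file hello

file should report an ARM ELF executable rather than an i386/amd64 one.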

3) Compile a port:

First create two folders, one into which to install the ports, one to use as working directory for the port. In my case /home/arm/portinstall and /home/arm/portwork. This keeps things clean.

Then cd into a ports directory and issue the following command:

make PREFIX=/home/arm/portinstall WRKDIRPREFIX=/home/arm/portwork CONFIGURE_ARGS+="--host=arm --prefix=/home/arm/portinstall" LDFLAGS+="-rpath=/usr/obj/arm.arm/usr/devel/tmp/lib:/usr/obj/arm.arm/usr/devel/tmp/usr/lib:/home/arm/portinstall/lib:/usr/obj/arm.arm/usr/devel/lib -L/usr/obj/arm.arm/usr/devel/tmp/lib -L/usr/obj/arm.arm/usr/devel/tmp/usr/lib -L/home/arm/portinstall/lib -L/usr/obj/arm.arm/usr/devel/lib" NO_PKG_REGISTER=1 clean install

The command explained:
PREFIX sets the installation prefix for the port
WRKDIRPREFIX sets the working directory
CONFIGURE_ARGS+= are additional arguments passed to the configure script, which does not always observe the environment variables. Therefore prefix and host are declared again. For ftp/curl for example you also need to pass --without-random.
LDFLAGS sets the library search paths. As you're cross-compiling you need to make sure that the program links to the correct library, the ARM one not the one of the host you're compiling on.
This is fixed using the -L flags and the -rpath flag. Experience has shown you need both.
NO_PKG_REGISTER needs to be set in order to avoid the package being registered on the local system. If you cross-compile a port that isn't also installed on the build host, you don't want the system to think it has been installed when it actually hasn't (it's only been cross-compiled).

The information in this post is based on the outdated information found here.

Some ports might need additional tweaking, by passing additional options to LDFLAGS or CONFIGURE_ARGS.
Ports might also need additional compilers or make tools (like gmake) and might try to install them as part of the ports installation process. You have to make sure that they are installed on the local system beforehand, as the cross-compiler will generate ARM binaries otherwise, and the cross-compilation of the main port will fail.

Another possibility is to create a separate jail in which you do the cross-compilation and install the ports in the default directory instead of using the PREFIX directive. It's a bit messier, as all ARM and non-ARM ports are installed in the same location, but you can then use a

make package

and then install the package on the destination machine.
The biggest problem in this situation, though, are the ports which have the same port as both build dependency and run-time dependency. In that case you'd need an ARM and a non-ARM version of the port in the same place, which won't work.
Having not personally tried the jails solution, I cannot suggest a proper workaround for this problem.

[edit]
Aleksandr Rybalko also has some hints about cross-compiling, which can be found here.
Specifically, point 2 in that blog deals with dependencies.

Installing Matlab 2011a on FreeBSD

After having had to move my Linux box from my desk due to Occupational Health and Safety issues (it seems that 5 PCs around you are just too many), I had to install Matlab on my main PCBSD 8.2 box.
A scary task after having failed with 2009b and 2010a a while ago (on a box running 8-CURRENT though).

Now it seems to work just fine. All you need is the Linux compatibility layer and libraries, plus tweaking some of the files:

For the installation:

Edit the "installer" file and replace

#!/bin/sh
with
#!/compat/linux/bin/sh

Then install Matlab, muck around with the license stuff, and cd to the installation folder.

In the installation folder edit file "bin/matlab" and replace

#!/bin/sh
with
#!/compat/linux/bin/sh

as before. Then edit "bin/util/oscheck.sh" and again replace

#!/bin/sh
with
#!/compat/linux/bin/sh
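Since it's the same one-line change each time, the two files in the installation folder can also be patched in one go (a sketch using FreeBSD's sed; the installer itself has to be edited before installation anyway):

sed -i '' '1s|^#!/bin/sh|#!/compat/linux/bin/sh|' bin/matlab bin/util/oscheck.sh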

Then start Matlab by simply typing "matlab"

It works, though the Java interface still crashes when trying to work with multiple directories containing lots of files (about 10000 or more), like it did with 2010b under Linux.
Well, gotta live with that.

Monday 11 April 2011

Konica Minolta Bizhub C652, C652DS, C650 driver with Account Tracking for CUPS

We recently got a new printer on the 6th floor of the EN building where our Faculty and Research Group resides. It's a Konica Minolta Bizhub C652DS, the slightly newer model of the C650 found on the 5th floor.

Quite a massive printer, with additions like folding, stapling and hole punching units etc.
The CUPS drivers from Konica are actually not too bad, with the exception that they do not support account tracking, which is used within the faculty.

With the help of John and Jason and some searching I was able to create two PPD files for the C652 and C650 printers, using the femperonpsc250mu.pl filter by Rui Ferrera.

It's now as simple as copying the femperonpsc250mu.pl script to the CUPS filter directory (like /usr/libexec/cups/filter), setting the right owner and permissions, and then installing the printer using the new C652 and C650 PPD files found here, using the lpd:// protocol.
(It might also work via https:// as described here)
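From the command line the whole setup boils down to something like this (printer name, host name, queue and PPD path are placeholders; the same can of course be done through the CUPS web interface):

cp femperonpsc250mu.pl /usr/libexec/cups/filter/
chmod 755 /usr/libexec/cups/filter/femperonpsc250mu.pl
lpadmin -p bizhub-c652 -E -v lpd://printer.example.org/print -P /path/to/KM-C652.ppd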

As CUPS does not yet allow creating a free-form input field for options, you need to set the account key etc. via a pull-down menu (or combobox or whatever you call them).

It also works if you install the printer on a single CUPS print server and just point the various clients to it. In that case, however, you'll need to set your own passcode or account key in each printing UI (once for KDE apps, once for GTK apps, and so on).

Happy printing!

Download the tarball