Saturday, August 8, 2009

Fedora - the name

I have been a Fedora Linux (initially Fedora Core Linux) user since version 1. Ever since, I have wondered about the name Fedora from time to time. It sounded much like a Russian princess. Didn't seem to fit, though.

Today, I was looking for a birthday present for my father. Unlike me, he's an avid walker. So I thought a hat might be a good idea. (He used to wear one in the past.) And while searching, I stumbled upon this entry in Wikipedia: a Fedora is a hat! And, if you look at the picture, it very much resembles the famous Red Hat!

A name well chosen.

Thursday, June 18, 2009

The new kid on the virtualization block

Last week, I happily upgraded my Linux installation to Fedora 11. As usual, this wasn't driven by any requirements, just by the fun of having the latest and greatest. (Do I hear Anja's "MEN!"? :-) On the whole, things have improved. For example, my UMTS stick is now working out of the box, and monitor switching at work now works nicely.

An obvious minus, however, is the integration of VMware. I am using VMware Server 1, which is almost unsupported on Fedora in general. In particular, it is unsupported on Fedora 11. As usual, it was no problem to find a matching kernel patch. But, what a surprise, it is no longer sufficient to patch the VMware modules: you need to patch the kernel as well. Ok, I figured out the procedure within an hour. But that means I will have to compile my own kernel with every kernel upgrade. Given the frequency of kernel updates on Fedora, I'll likely be recompiling every two weeks or so.

That brings up the question of alternatives. I have tried VMware Server 2 in the past, but was definitely disappointed: it is a resource hog (more than 400 MB on disk, for example), and the loss of the former vmware-console just adds to that. Nevertheless, I tried again, only to find that the same problem is present: the kernel modules fail to load with the same error message about a missing symbol.

I had never tried VirtualBox before, but this seemed like a good enough reason to do it now. First impressions were quite positive, until I tried to start a VM with a Windows guest system. Obviously, that's not so easy: to get it to work, you have to change some settings, fiddle with the Windows registry (before starting Windows, of course ...) and similar niceties. Again, that's not what I'd like to have as a standard procedure. VMs should be easy to adopt from, or hand over to, colleagues.

So far, I had stuck with VMware Server 1. But today I spotted the announcement of the VirtualBox 3 beta. And, hard to believe, it just works, even with Windows guests imported from VMware. I think VMware has lost a user...

Monday, June 8, 2009

Automatically compiling VMware modules

VMware does, in general, an excellent job. It runs smoothly on my machine and is generally one of my best workhorses, particularly since VMware Server comes at no cost.

There's an exception, though: the kernel modules. As usual, the problem doesn't exist on a Windows host (or guest, for that matter), but you've got to know how to deal with it on a Linux host. VMware depends on a number of kernel modules, for example vmnet.o, which creates the virtual network interfaces. In theory, these modules are delivered as part of the VMware distribution for a variety of Linux distributions. In practice, I am unaware of any case where these precompiled modules could be used: I have always needed to compile them myself.

Ok, that's not too much trouble (at least not if you've been able to obtain the right patch for your system, typically after a lot of Googling): basically, all you need to do is run vmware-config.pl (on the host) or vmware-config-tools.pl (in the guest). However, one problem remains: the modules need to be recompiled after every kernel upgrade.

But here's a solution: an init script, invoked on every reboot before VMware starts. The script checks whether there are modules matching the running kernel. If not, vmware-config.pl is invoked with the options necessary for batch mode.
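The actual script is linked below; what follows is merely a rough sketch of the idea. (The module directory, the chkconfig priorities, and vmware-config.pl's --default batch option are my assumptions here, not necessarily what the downloadable script does.)

  #!/bin/sh
  # vmware-kernel-modules: rebuild the VMware kernel modules if none exist
  # for the running kernel. Must run before the regular vmware init script.
  #
  # chkconfig: 2345 08 92
  # description: Recompiles the VMware kernel modules after a kernel upgrade.

  # Assumption: vmware-config.pl installs its modules below this directory.
  MODULE_DIR="/lib/modules/$(uname -r)/misc"

  case "$1" in
    start)
      if [ ! -e "$MODULE_DIR/vmmon.ko" ] && [ ! -e "$MODULE_DIR/vmmon.o" ]; then
        echo "No VMware modules found for $(uname -r), recompiling ..."
        # --default answers all questions with their defaults (batch mode).
        /usr/bin/vmware-config.pl --default
      fi
      ;;
    stop|restart|status)
      # Nothing to do; this script only prepares the modules.
      ;;
  esac
  exit 0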

To install it, download the script here and store it as /etc/init.d/vmware-kernel-modules. On Fedora Linux, create the necessary links by running /sbin/chkconfig --add vmware-kernel-modules. On other distributions, this must be done manually.

Monday, May 11, 2009

Mobile Internet with Fedora 10 and Simply (T-Mobile)

I think any experienced Linux user knows that feeling: even though you generally feel fine with your OS, there are these moments when you ask yourself whether you should switch to Windows. Typically, these moments come whenever you've got some new hardware gadget.

In my case, this has been a so-called web'n'walk Stick from Germany's T-Mobile, which I intend to use for connecting to the Internet while travelling. Unfortunately, I am quite frequently on the road or visiting customers, where I cannot connect my laptop to the network, so this thing will come in quite handy.

Were I a Windows user (or perhaps a Mac fanatic), this wouldn't be worth posting: of course, the device is neatly tailored for Windows, making almost everything automatic. Basically, you plug it in, enter your PIN, and that's it. Not so with Linux, or (to be fair) at least not with Fedora Linux. I've got it running now (in fact, this post is written with the Ethernet disconnected), but only after several hours of research and postings on mailing lists and in the Fedora Forum (not that those yielded any results). As usual, you need to understand what's going on, acquainting yourself with technical concepts, command line utilities and configuration files which you never intended to know. And that's the reason for writing this: perhaps I can save some other person's time.

The first thing to understand is this: in order to make life easier for Windows and Mac users, the device runs in one of two quite different operation modes. The first mode is the default: the stick acts as a very small USB drive. On Windows, this allows software to be installed from it automatically. Needless to say, there are no Linux drivers. (The good news: all necessary drivers come as part of Fedora. :-) Also needless to say: this default mode is not what you want. You must deactivate it and switch to the other mode, in which you can actually use the stick. (I assume the installed software does that automatically on Windows.)

There are several tools and utilities for doing this switch: usb_modeswitch (which I chose, because it comes with Fedora), rezero, ozerocdoff, and perhaps a multitude of others. (My guess is it would help if distributions could agree on a common choice here.) These tools typically need some configuration. To obtain this configuration, start by looking at the output of


[jwi@mcjwi ~]$ /sbin/lsusb
Bus 001 Device 005: ID 413c:8103 Dell Computer Corp. Wireless 350 Bluetooth
Bus 001 Device 006: ID 0b97:7762 O2 Micro, Inc. Oz776 SmartCard Reader
Bus 001 Device 004: ID 0b97:7761 O2 Micro, Inc. Oz776 1.1 Hub
Bus 001 Device 002: ID 413c:a005 Dell Computer Corp. Internal 2.0 Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 003 Device 004: ID 0af0:6971 Option
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

If you don't know which line belongs to your stick, just remove it, run the program again, and see what changes. In my case, the interesting part is

  Bus 003 Device 004: ID 0af0:6971 Option

The ID consists of two parts: The so-called vendor ID (0af0) and the product ID (6971).
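If comparing the two listings by eye seems tedious, here is a minimal sketch of capturing the difference (the file names are my own choice):

  /sbin/lsusb > with-stick.txt        # with the stick plugged in
  # remove the stick, then:
  /sbin/lsusb > without-stick.txt
  diff without-stick.txt with-stick.txt   # the extra line is your device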

If you have usb_modeswitch, you might inspect a file called usb_modeswitch.conf (in my case /etc/usb_modeswitch.conf) and search for the above IDs. There's a good chance that you'll find them there. If not, enter them into Google, together with usb_modeswitch as an additional search term. Again, there's a good chance of finding something. If not: bad luck. I don't know what to do in that case, other than, I guess, consulting the mailing lists or forums of the above tools.
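For illustration, the corresponding entry in usb_modeswitch.conf would look roughly like this (a sketch; the option names are taken from the usb_modeswitch documentation, and the values simply mirror the command shown below):

  ########################################################
  # Option / T-Mobile web'n'walk stick (sketch)
  DefaultVendor=   0x0af0
  DefaultProduct=  0x6971
  TargetProduct=   0x6971
  MessageEndpoint= 0x05
  MessageContent="55534243785634120100000080000601000000000000000000000000000000"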

In my case, I've learned from the configuration file and some Internet pages that I've got to enter the command


sudo usb_modeswitch -v 0af0 -p 6971 -P 6971 -m 0x05 -M 55534243785634120100000080000601000000000000000000000000000000

to change the mode. First step done. If everything's fine, the following command should now provide some meaningful output:

  [jwi@mcjwi ~]$ hal-find-by-capability --capability=modem
/org/freedesktop/Hal/devices/usb_device_af0_6971_Serial_Number_if0_serial_unknown_1
/org/freedesktop/Hal/devices/usb_device_af0_6971_Serial_Number_if0_serial_unknown_0

Now enter NetworkManager (aka nm-connection-editor) and switch to "Mobile Broadband". Press "Add" to create a new connection. If you're lucky, you will find a popup like the following:


The second and third entries are default entries. The presence of the first entry shows you that NetworkManager has recognized your stick. Needless to say, in my case it didn't. After consulting the usb_modeswitch home page, I found out that I needed to load another driver explicitly:

/sbin/modprobe option

In your case, this might be usbserial rather than option. Sorry, I can't help you here.
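If it is usbserial, the vendor and product IDs are typically passed along as module parameters, roughly like this (a sketch, reusing my IDs from above; yours will differ):

  /sbin/modprobe usbserial vendor=0x0af0 product=0x6971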

But once you can choose your "modem", you're almost there! (Not quite, of course.) NetworkManager will open a window like the following:



(In the US, or, to be precise, with CDMA, this might look quite different.) The PIN and the PUK are what you get from your mobile broadband provider. If you're even luckier, the provider will tell you the other data as well, in the written data sheet or at least on the hotline. Perhaps you already guessed it: my provider didn't. ("We aren't supporting Linux.")

To find the remaining data, use Google again. In my case, the relevant search terms were "T-Mobile APN PIN". (Simply is a reseller of T-Mobile.) Another option is to consult the list of operators on the Gnome site. I found the following entry there:

        
  T-Mobile (D1), Germany (de)
  APN:      internet.t-d1.de
  Password: t-d1
  DNS:      193.254.160.1, 193.254.160.130

That tells me the APN (which might be some server's host name?) and the password. (You should be able to ignore the DNS servers, as they are configured automatically.) In my case the user name was missing, but some more googling revealed the user name "Internet".

Believe it or not, it's working now! (You might need to click on the NetworkManager applet and activate the connection, if you didn't choose "Automatic Connect".)

And now you know why I clearly prefer to see a lot of Linux experience in profiles or CVs when my employer hires: people who deal with this stuff should be ready for the problems we provide them with...


Monday, April 27, 2009

Awesome Coldplay video

A video I find myself watching over and over again is Life in Technicolor ii by Coldplay. Not only is it good music, it is also hilariously funny. I can so imagine standing with the other parents at Marie's kindergarten party, looking absolutely blank at what's going on while the kids are celebrating. (I gather it's better to get used to that feeling.) My personal highlights are the crew members and the bikers crossing the stage.

Apache XML-RPC 3.1.2

I'm glad to announce the new version of Apache XML-RPC, available from the Apache mirrors. As usual with this project, the bad news is that the developers (including myself, of course) are almost gone. The project would be ready for archiving, were it not (and that is the good news) for the contributors: this release was completely driven by users of Apache XML-RPC. Not a single bug fix or patch was created by the committers: everything came from contributors through Jira or the mailing list. You can see for yourself by looking at the changes. (And that list doesn't even include XMLRPC-163.) Imagine that with a closed source product.

Monday, April 20, 2009

NonApache.Org

It is now more than five years that I have enjoyed my status as an Apache committer. I still believe it is something to enjoy, although I am getting older and less active. I have learned to live with the realities that must be accepted: this applies, in particular, to the legal straitjacket, but also to the considerations that a community applies. A community? There are plenty of them, each of them very different: for example, the Webservices project tends to take things rather easy, whereas Commons or the Incubator can be quite formal. Javadoc jar file with or without META-INF/NOTICE.TXT? Good enough reason for hours and hours of discussion. Then, legal-discuss can be quite interesting. And the infrastructure mailing list is always a good place for hearing what's going on. Sadly, it never paid for me to listen to jobs@apache.org. :-)

That said, there are always cases when I am not sure whether it's worth the hassle. Running a project (effectively) on your own within the ASF basically means that you have the actual project work plus the ASF-related burdens. For example, pushing an Apache project's release can be quite some work: I honestly marvel at people like John Casey, who have the patience to deliver 10 release candidates in a row. (I feel exhausted after three or four release candidates.) Compare that to the times of JaxMe 1, when I published a new release after almost every commit. Of course, that won't be possible for projects like Maven or Axis. But there's something to the "Release early, release often" mantra.

I always thought that this uneasiness with the ASF procedures and the responsibility to the community was my personal problem (let's face it: I'm much more of a maverick than a team player), until I read Robert Burrell Donkin express similar feelings as the reason for starting RAT at Google Code. But what made me think even more was the recent discussion on a proposed Commons Incubator project: obviously, there are a lot of projects which are small and expect to remain small, which would like to enter the ASF, but don't manage to overcome hurdles like the rule of at least three committers. Are they doing themselves any good?

The FSF hosts a server called nongnu.org: it contains a lot of (mostly smaller) projects which manage themselves, and which most likely feel attracted by, or even close to, the FSF and its ideals. These projects don't have the FSF's holy endorsement (sorry, Incubator members, but that's how "endorse" reads to me sometimes), but they are able to work, to publish releases, to attract a community: almost everything an Apache project does.

I'd like to ask you to consider whether it wouldn't be a good idea to have something similar in the grey area of the ASF. For example, Apache Labs could almost immediately move there - and finally start publishing releases. The sandbox projects - what better place for them to live? I have no suggestions for the organizational details, but I think it's worth considering.



Wednesday, April 8, 2009

10 years no clue

Exactly 10 years ago, early in the morning of April 8, 1999 (according to my perception at the time, it was 8:30 AM), I entered a room with some 25 or so boys and girls, most of them between 20 and 30 years old. My task was to act as a Unix trainer for a professional development class. I didn't know any of them. In particular, I had no idea that, just 7 months later, I'd be married to the girl sitting in the sixth chair of the left row.

It wasn't exactly love at first sight. For my part, I thought "nice girl" immediately. But, to be honest, there has been more than one nice girl. As for Anja ..., well, read on.

Being a Linux and Unix devotee, my intention was to start with a little Unix history, with the exciting times of people like Ken Thompson, Dennis Ritchie, Brian Kernighan, Alfred Aho, or Peter Weinberger at Bell Labs, who managed to contribute operating systems, programming languages, and other practical tools as well as new and exciting ideas and theories in the field of computer science.

I do not know whether I managed to share some of my feelings with the other boys and girls. With Anja, I failed completely: at some point, I managed to confuse Ritchie and Kernighan, naming the latter as one of the Unix inventors. Having experienced lousy trainers before, she settled on her opinion of me and, after one or two hours, wrote to a friend that I had "absolutely no clue of Unix".

With my prior 10 years or so of experience in Unix administration and programming, I can only assume that this hasn't changed in the 10 years since. So today is my anniversary of having no clue.

To Anja with love,

Jochen

Thursday, March 26, 2009

Git takes the lead

So far, I have been reluctant to use a DVCS. We have Subversion and it does an excellent job, at least for me. Ok, the idea of local commits while travelling on the train is compelling, but then again we might have inexpensive UMTS flat rates in the future. Maintaining a local branch over a long time is not what I usually do.

But that's not the point if you are curious to try something new (which I still am, at least from time to time). The killer argument against DVCS has been, IMO, that there isn't the DVCS: we've got Git, Bazaar, and Mercurial, to name just the most important ones, not to mention Arch, Monotone, or Darcs. I'd be ready to learn to use one of them, but definitely not two or even three. And, until now, I never had the impression that learning one of them would be sufficient. For example, SourceForge supports no fewer than three different DVCSs (as well as grandpa CVS and SVN).

That has changed: the ASF now has a read-only Git repository. Ok, currently it seems to be a one-man effort by Jukka Zitting (while writing this, it just came to my mind that I owe him a kudo on Ohloh), but the interest is obvious, as projects are already requesting to be added. Gnome has recently decided to move to Git completely. And if you look at Bug 257706 in the Eclipse bug tracker, you can't fail to notice that it sounds like Eclipse will have its first Git repositories soon. (Also note the Git BoF.)

Similar news from Bzr or Hg is missing. Ok, Max Kanat-Alexander is pushing Bugzilla towards Bzr (he deserves yet another kudo, btw), and Widelands has moved at least its web site to Launchpad, but the list of projects using Git (including the Linux kernel, Perl, Rails, Fedora, and X.org) is becoming more and more impressive.

'Nough said, I'll give it a try. My main reservation is the quality of JGit and EGit (in fact, these are the most likely reasons I might withdraw), but I should keep in mind that I refused to use Subversion because of Subclipse's quality as late as 2004. Let's see what the future brings.

Saturday, March 14, 2009

As fast as it can get

It's sometimes surprising to find your own name in a place where you really didn't expect it. But in this case it is even more surprising, because I wrote the mail in question less than three hours before a) Yoav wrote his article and b) I spotted it on Planet Apache. :-)

Wednesday, February 11, 2009

When dog food isn't good enough

Recently a proposal came up on the Apache infrastructure list (sorry, AFAIK the list isn't archived, hence no pointers from here) to install Nexus Professional on one of the Apache servers as a repository server. The idea was, in particular, to use Nexus' staging facilities for pushing Apache software releases.

The proposal was posted by Brian Fox, with support from Jason van Zyl. As you can see from my links, both are employed by Sonatype, the company producing Nexus. (Jason is the founder and CTO.) Obviously, the ASF won't be a bad reference for them. The proposal was accepted rapidly, almost without objection. (There was some discussion, but mostly about technical details or about Maven in general.) As a result, you can view the installed Nexus live today, which is what I did.

In all honesty, I'm going to like it. Having had my share of release management and the related trouble, this is going to help: pushing some 30 or 40 files to people.apache.org (the number seems big, but you have to consider the various signatures, like .md5 and/or .sha1 files as well as GPG signatures, aka .asc files) via SSH to a common place, and later distributing them manually to various places, is error-prone. But that is how it is: placement in the common place (typically a user's public_html directory) allows review and voting by fellow developers. Once the release is accepted, the files are moved to their final target locations. Nexus can help with that, and the UI is, of course, very neat.
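To give an idea of the manual procedure, here is a sketch of what staging such a release candidate amounts to (the paths and file names are made up for illustration; the actual artifact list is much longer):

  # sign and checksum every artifact (sketch; file names are hypothetical)
  for f in target/xmlrpc-3.1.2-bin.tar.gz target/xmlrpc-3.1.2-src.tar.gz; do
    gpg --armor --detach-sign "$f"        # creates $f.asc
    md5sum "$f"  > "$f.md5"
    sha1sum "$f" > "$f.sha1"
  done
  # upload everything to the common staging place for review and vote
  scp target/xmlrpc-3.1.2* people.apache.org:public_html/staging/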

But that is not the reason for today's entry. What strikes me as rather odd is how easily a commercial product can make its way into the Apache server park. This is not the first such product: Apache is hosting a Jira server for issue tracking as well as a Confluence wiki. I observed the discussions when these were introduced. Jira is basically the successor to the Apache Bugzilla (at least, more and more Apache projects are leaving Bugzilla in favour of Jira), just as Confluence is quickly replacing the Apache MoinMoin wiki. In both cases the question was raised whether an open source product wouldn't be preferable. Ideally, whether there couldn't be an Apache project to use: "Eat your own dog food" has some tradition within the ASF. The Apache web servers frequently run relatively stable development versions. In both cases there were no such projects, and the open source alternatives had their share of problems. So I understood the decision.

Which is more than I can say for the case of Nexus as an Apache repository server. There is Archiva, an Apache project which could do the job. Ok, it doesn't have the bells and whistles, but it does its job. I can tell, because I am using it in my daily work. It is a mature project in active development, obviously also sponsored by commercial companies. Ok, it can't support staging right now, but that wouldn't be overly difficult (Brett Porter has offered to add it, should the ASF require it) and could be done within a reasonable time frame. That should be enough at least to consider it decent dog food.

Alas, no one seemed to be interested in the Nexus discussion. So it's Nexus. I can live with it. Understanding is a different matter.