Category Archives: Linux

Apps worry me

Apps are worrying me.  I should explain.

By apps, I mean the apps that you install on your Android phone or iPhone.  They're great and useful (sometimes), but they are not free in an Open Source way at all (and here I'm referring to Android), and yet I hear lots of so-called Open Source people talking as if this is the new way.

Rubbish.  Shotofjaq talked about using apps, via micropayments, as a way to bring games to Linux (because, let's face it, Linux games are mostly crap).  But that undermines the hard work that, for example, the GNOME and KDE guys have put in, for FREE.  Why should games be any different?  What makes games so special that it's OK to pay for them, but not OK to pay for OpenOffice?  And do I get the source code if I pay?  Somehow I doubt it!  Jono and Aq really need to think about what they are saying here.

Apps, in the Android world, are akin to the freeware world of 10-15 years ago.  Great stuff, but it was replaced by Open Source.  My worry is that this is slowly being reversed by apps and no one is seeing it, and when they do it'll be too late.

Do I really think the iPad is a problem?  No, I don't; that's over-hyped.  But it's amazing how many people are using their phone to do 90% of what you'd have used a computer for two years ago: email, Twitter, casual browsing and so on.  That's the worry, not tablets.

Maybe I'm worrying too much, but somehow I don't think so.

Linux Package Formats

Unlike most things in Linux, packages do not follow a single standard; rather, they come in many formats, though the most popular, of course, are rpm and deb.

The deb versus rpm holy war has raged for many years, and originally Debian's deb was without doubt the superior package format, with the package manager able to handle and install any dependencies a package needed to work.  rpm packages did not have this luxury, and the phrase "rpm dependency hell" was rightly coined.

These days, with the yum and zypper package managers, rpm dependency issues are long gone, but some still hold onto the idea that they exist.

However, for me, rpm is now the better package format to use.  Ten years ago the reverse would have been true.

Why?  Because building an rpm package is straightforward and requires only a small learning curve, whilst debs are a complete pig to build, as many will agree.

rpm spec files are very easy to work with, as is rpmbuild itself.  Creating a yum repository is simply a case of starting an HTTP server, creating a standard repository directory structure, placing the rpms you need into it and running createrepo.
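The whole workflow can be sketched in a few commands.  "hello" is a hypothetical package name, and the paths assume the standard ~/rpmbuild tree that rpmdevtools sets up; adjust to taste:

```shell
# Create the standard ~/rpmbuild/{SPECS,SOURCES,RPMS,...} tree
rpmdev-setuptree

# Build binary and source rpms from a spec file
rpmbuild -ba ~/rpmbuild/SPECS/hello.spec

# Publish the results as a yum repository under the HTTP server's docroot
mkdir -p /var/www/html/repo
cp ~/rpmbuild/RPMS/*/*.rpm /var/www/html/repo/
createrepo /var/www/html/repo    # generates the repodata/ metadata
```

Clients then just need a .repo file whose baseurl points at http://yourserver/repo and yum does the rest.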

If you still think debs are superior to rpms, it's worth revisiting that assumption.  I believe you'll find the opposite is true.

Ubuntu

Before I start I should point out that I love Ubuntu and think it's one of the best Linux distros out there.  I still use it on many of my PCs and will do so for the foreseeable future... but...

I think that Karmic was a very bad release for Ubuntu.  It's been, unfairly, called Ubuntu's Vista because of the many issues there have been with it.  I was stung by the nasty problem it has with SSD-based systems on my EEEPC, which was a bit scary.  But I don't really see that as a problem so much as the approach taken by various employees to these issues.

I very much disliked the attitude of some.  The most moronic comment went along the lines of "this just shows how popular Ubuntu is".  WTF??  It won't be popular if there is another release like this.

If Ubuntu is truly Linux for the masses, then the idea that the "community" should help test and find the bugs has got to go.  I can't believe I'm saying that, as I really believe in it, but running a company hoping that this approach will ensure your product is solid is not gonna fly.

The best route Ubuntu can take is to stop the 6-month upgrade-cycle insanity.  No one outside the hardcore set cares that much (and they can handle it by themselves).  At most a 12-month cycle is enough, but I'd be tempted to move to 18 months, with core applications such as Firefox and OpenOffice kept relatively up to date within that time frame.

Why?  Well, there just isn't enough time to fix all the bugs and issues in the current Karmic release, so they get labelled "won't fix" or take just too long to fix.  With a saner time frame there would be more time to fix these issues without having to worry about the next release.

The other option is to be more honest about what the 6-month releases are.  If they are development releases, call them that, so people know what they are installing and are not shocked when it doesn't quite work correctly.  This reduces the negative opinions, as you have told them it is a not-quite-there release.

Most people just want their PC to work.  My LTS system does this fine; it just works and does everything I want it to do (although sound has just broken... again).

So, in a nutshell: Ubuntu, please stop worrying about having the latest bells and whistles and concentrate on building the truly quality system that everyone who loves Linux really wants.  Forget the 6-month cycle; it'll kill you off.

Advanced distro myth

There is a dumb idea out there in Linux distro land that less means more.  Take Crunchbang.  It positions itself as a distro for advanced users because it uses Openbox rather than the GNOME desktop of Ubuntu, on which it is based.

Why does this make it so?  If you prefer a minimal desktop, great, good for you; I have no problem there.  But a desktop choice does not make you an advanced user.  Far from it.

Crunchbang is no more an advanced user's distro than any of the top distros.  If you want a real advanced user's distro, use Gentoo or Arch, but you'll spend all your time fixing it.  You'll learn a lot but do little; the choice is yours.

Just don’t believe that using a minimal desktop makes you an advanced user.

Arch Linux

I've been meaning to mention this for quite a while but never seemed to have the time.

As I said before, I used Easy Linux (yucky name, etc.) on my eeepc, which was fine as far as it goes, with a nice interface and so on.  However, the boot time (like most of the 8.x series of Ubuntu) was pretty slow, and GNOME is a heavy desktop for a netbook.

So I decided I wanted something lighter, and that meant the easiest route was to ignore most of the "popular" distros, as these always come with heavy desktops.  Arch Linux came to mind, as I'd used it in the past and very much liked it.  Arch Linux would allow me to install exactly what I wanted and nothing else.

So, which desktop?  Well, I wanted a degree of functionality, so Fluxbox was out, and XFCE is very nice but, frankly, I've used it for years at work so wanted something else.  E17 therefore seemed the best option.

Right, so installing it.  Easy.  There is a guide on the Arch Wiki on how to do this, and I was up and running, console-wise anyway, pretty quickly.  X was another question.  It took quite a while to get running, as letting X guess didn't work, so I had to hand-hack a config, which I'm out of practice with.  But I got it going eventually.
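The whole route boils down to a handful of commands.  A rough sketch only: the package names here are assumptions from memory of that era (E17's package has been renamed over the years, appearing as both e and enlightenment depending on repo vintage):

```shell
# Install X and the xinit helper from the Arch repos
pacman -S xorg-server xorg-xinit

# Install E17 (package name varies by repo era: "e" or "enlightenment")
pacman -S enlightenment

# Tell startx to launch E17, then start the desktop
echo "exec enlightenment_start" > ~/.xinitrc
startx
```

If X's hardware guessing fails, as it did for me, a hand-written /etc/X11/xorg.conf is still the fallback.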

E17 is a great desktop and I'm glad I made it mine.  It is amazingly lightweight but still provides the functions I need on a netbook.  If you haven't looked at it, do.

Now I have a netbook that boots in less than 20 seconds, and a desktop that looks and works great.

Arch Linux is just how I remember it: utterly flexible, allowing you to build a system the way you want, and I recommend it highly.

Linux TCP auto-tuning

There seems to be a lot of confusion over how Linux auto-tuning for TCP works, so here are some links to documents that outline in good detail exactly how it works.  After reading these you should be up to speed on the topic.

http://www.csm.ornl.gov/~dunigan/netperf/auto.html

http://public.lanl.gov/radiant/pubs/hptcp/hpdc02-drs.pdf

http://www.psc.edu/networking/projects/tcptune/#Linux

http://www.broadnets.org/2004/workshop-papers/Pathnets/03_TCPHighSpeedWAN-SylvianRavot.pdf

http://fasterdata.es.net/TCP-tuning//linux.html
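As a quick orientation before diving into those papers, the auto-tuning knobs live under /proc/sys/net/ipv4.  A sketch of inspecting them (the -w values shown are purely illustrative, not recommendations):

```shell
# Current TCP buffer limits: min, default, max (bytes)
cat /proc/sys/net/ipv4/tcp_rmem
cat /proc/sys/net/ipv4/tcp_wmem

# 1 means receive-buffer auto-tuning is enabled
cat /proc/sys/net/ipv4/tcp_moderate_rcvbuf

# As root, the maxima can be raised for high bandwidth-delay paths, e.g.:
# sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
# sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
```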

CPU Affinity and taskset

No matter how performant any code is, the architecture of the server it runs on will have an impact on its performance profile.  This impact could, with luck, be minimal, but if the application is threaded and/or latency-sensitive that is unlikely unless the architecture has been taken into account.

Processor Architecture

Unfortunately, especially with Intel’s most recent multi-core processors, processor architecture needs to be taken into account to gain the best performance from an application. If the application being deployed requires 2 cores, or is expected to perform better over 2 cores, which cores are used can have a bearing upon the performance of the application.

This is because of the architectural approach taken on the latest Intel processors. On the Intel Clovertown and Harpertown Xeon processors, the L2 cache is not shared across all cores. Within a single processor there are 4 L1 caches, 1 per core and 2 L2 caches, shared between a pair of cores. In addition the pairing of the L2 cache between the cores is also different between architectures, just to add an additional level of complexity.
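You don't have to memorise the pairings: on any Linux system, sysfs reports which CPUs share each cache level.  A quick sketch (the paths are standard sysfs; the actual sharing reported depends on your processor):

```shell
# For each cache on core 0, show its level, type and which CPUs share it
for idx in /sys/devices/system/cpu/cpu0/cache/index*; do
    printf 'L%s %s: shared with CPUs %s\n' \
        "$(cat "$idx/level")" \
        "$(cat "$idx/type")" \
        "$(cat "$idx/shared_cpu_list")"
done
```

On a Harpertown this would show core 0's L2 shared with its paired core, which tells you directly which core pairs to co-locate threads on.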

CPU Affinity Overview

There are two types of CPU affinity. The first, soft affinity (also called natural affinity) is the tendency of the scheduler to try to keep processes on the same CPU as long as possible. It is merely an attempt; if it is ever infeasible, the process is migrated to another processor. The O(1) scheduler in 2.6 exhibits excellent natural affinity. On the opposite end, however, is the 2.4 scheduler, which has poor CPU affinity. This behaviour results in the ping-pong effect: the scheduler bounces processes between multiple processors each time they are scheduled and rescheduled. It should be noted that Red Hat back-ported the O(1) scheduler into RHEL3 (in addition to many other changes) and that the 2.4 kernel in that release is really a mix of 2.4, late 2.5 and early 2.6 kernel sources.

Hard affinity, on the other hand, is what the CPU affinity system call provides. It is a requirement, and processes must adhere to a specified hard affinity. If a process is bound to CPU 1, for example, then it can run only on CPU 1.

CPU Affinity Benefits

The first benefit of CPU affinity is optimised cache performance. The scheduler tries hard to keep tasks on the same processor, but in some performance-critical situations, e.g. a highly threaded application, it makes sense to enforce the affinity as a hard requirement. Multiprocessing computers try to keep the processor caches valid. Data can be kept in only one processor's cache at a time; otherwise the caches may grow out of sync. Consequently, whenever a processor adds a line of data to its local cache, all the other processors also caching it must invalidate that data, and this invalidation is costly. The real performance penalty, however, comes into play when processes bounce between processors: they constantly cause cache invalidations, and the data they want is never in the cache when they need it. Thus cache miss rates grow very large. CPU affinity protects against this and improves cache performance.

A second benefit of CPU affinity arises when multiple threads are accessing the same data: it can make sense to bind them all to the same processor. Doing so guarantees that the threads do not contend over data and cause cache misses. This does diminish the performance gained from multithreading on SMP; if the threads are inherently serialised, however, the improved cache hit rate can negate the loss.

The third benefit is found in real-time or time-sensitive applications. In this approach, all the system processes are bound to a subset of the processors and the application is bound to the remainder. In a dual-processor system, for example, the application would be bound to one processor and all other processes to the other. This ensures that the application receives the full attention of its processor.

Implementing CPU Affinity under Linux

There are two methods of implementing CPU affinity: within the source code of the application itself, using the sched_setaffinity system call, or via the command-line tool taskset.

Using taskset to assign CPU affinity

Under Linux it is straightforward to bind an application to one or more cores via the taskset command. Once you know the processor type you are using, and therefore the allocation you require, taskset can be used either to start the application bound to the correct cores or to rebind an already running application. For example:

taskset -c 2,6 <application>

The above taskset command is for a Harpertown-based system and binds an application to the third and seventh cores, i.e. cpu2 and cpu6 (taskset numbering starts at cpu0, hence the n-1). To change the binding of a process owned by another user, taskset needs to be run as root.

taskset can also be run on an existing application to change its processor binding(s) if required as follows:

taskset -cp 2,6 <pid>

To verify that a taskset binding has worked, or to check the binding profile of an already running application, run, as any user:

taskset -cp <pid>

This will return the core(s) being used by the process.
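Putting those commands together, a typical session might look like this.  sleep stands in for the real application, and cores 2 and 6 assume a machine with at least seven logical CPUs:

```shell
# Launch a stand-in process pinned to cores 2 and 6
taskset -c 2,6 sleep 60 &
pid=$!

# Show its current affinity list
taskset -cp "$pid"

# Rebind the running process to cores 0 and 1 instead
taskset -cp 0,1 "$pid"

# Clean up the stand-in process
kill "$pid"
```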


Vim tricks

Smarter searching

Although searching with vim would appear to be straightforward, what happens if you misspell a word, or are not completely sure how it is spelt?  With the standard search you are stuck; however, vim has a search mode called incremental search, which starts matching as soon as you begin typing the word you wish to find.  This is incredibly useful but is not enabled by default.  Luckily, all you have to do is enter the following in the command window

:set incsearch

to enable incremental searching.  Now when you enter search mode vim will start matching straight away, highlighting each match it finds until you reach what you are looking for.  Once you have found what you wanted, simply hit return or ESC to exit the search.
However, one issue is still outstanding even with incremental search enabled: matching is still case-sensitive.  Therefore Match, match and matcH would each need to be searched for separately.  As luck would have it, you can also set vim to be case-insensitive by enabling the following options in the command window

:set ignorecase
:set smartcase

Those coming from vi will recognise the ignorecase option, but what is smartcase?  Well, smartcase, when set in conjunction with ignorecase, makes vim search case-insensitively unless uppercase is used in the search pattern, in which case vim assumes you want a case-sensitive search.  So this gives you options when you perform a search.

If you find these useful and want them permanently enabled you need to edit, or create, a .vimrc file in your home directory, and add the following:

set incsearch
set ignorecase
set smartcase

On most Linux systems you should find the default vimrc file in /etc.  If you are creating your own version it is usually best to copy the default into your home directory, rename it to .vimrc and then edit it.

Working with files

vim can be made to act like a file browser; simply enter:

:e .

Vim will now display the contents of the current directory, which you can traverse using the arrow or navigation keys.  To open a file, simply navigate to it and press enter.  Directories can be entered in the same manner: navigate to the directory you wish to open, press enter, and you will be presented with its contents.  Should the contents of a directory be large, you can also perform the usual vim searches.

Of course, within vim you can open any file without the file browser by simply entering its name.  But what if you are not completely sure of the name?  Well, vim has another trick up its sleeve: file completion.  This works much in the same way as shell file completion.  You enter a few characters of the filename in question and press the tab key for vim to complete it.  If multiple files match the characters you have entered, you press tab until you come to the file you want to edit and then press return.

File completion is fully configurable (this is vim, after all) by modifying the wildmode parameter, either within your vimrc file or within vim itself.  To see all the possible settings enter

:help wildmode

within vim.  By default this is set to full mode and acts as outlined previously.

The previous examples all work on filenames themselves, but what if you want to find a file, or files, that contain a common string?  From the shell command line you would obviously use grep to show all the files containing the string, and you can do the same within vim.  Simply start vim and then run:

:grep <string> *

and vim will find all the files that contain the string and open them.  You then move between the matches using:

:cn – go to the next match
:cp – go to the previous match
:cc – show the current match

When you view each file, vim starts at the location of the match in that file, and you use :cc to go back to that match should you want to look at it again.  You use :cn, :cp and :cc instead of the usual :n and :N because this feature is built on vim's error (quickfix) handling, so vim stores the matching files differently.

Editing multiple files within one vim session

Vim, like vi, can edit multiple files at the same time, using the :n (next) and :N (previous) commands to traverse the files.  But you can also use split-screen mode to edit files, by entering:

:split

Now the screen is split in half by a horizontal rule.  You can use further split commands to divide the screen into further segments.  To move between the windows you use CTRL-W and the arrow keys: up to go to the window above, down to the window below.

This makes editing multiple files easy, especially if you want to view both files at the same time.

If splitting the screen is not to your liking, or you have many files you wish to edit, vim version 7 or greater has a new feature which may help you out: tabs.  Each tab is added, by default, to the top of the screen with the file you are editing as its label.  To open a file in a new tab within vim you enter:

:tabnew <filename>

Vim will open the file in a new tabbed window ready for you to edit.  Don't forget that you can also use vim's auto-completion feature with the tabnew command.  Another, possibly quicker, method is to supply all the files you want to open on the shell command line using the -p flag:

vim -p file1 file2 file3 file4

To navigate between tabs you have the following commands:

gt – next tab
:tabn – next tab
:tabp – previous tab
:tabfirst – first tab
:tablast – last tab

As mentioned, the tabs are displayed at the top of the screen.  If you find this a distraction you can turn it off by running:

:set showtabline=0

And switch it back on using:

:set showtabline=2

If you prefer not to see the tabs at all you simply add the following to your .vimrc file

set showtabline=0

With tabbing display switched off there will of course be times when you want to remind yourself of all the tabs you have running.  The following command provides such a summary:

:tabs

Vim will display a list of tabs and the file associated with it within the command window.

Lastly, one of the most useful features of tabbing is the ability to perform an action on all open tabs via the tabdo command.  As a simple example, the following replaces the text fred with joe in all open tabs:

:tabdo %s/fred/joe/g

Spell checking

Purists look away, but yes, vim now comes with built-in spell checking.  This is usually disabled by default; to enable it, enter the following:

:setlocal spell spelllang=en_gb

Replace en_gb with the language and region code you require.  Vim will now highlight incorrectly spelt words in red, and rare or uncapitalised words in blue.  To navigate the words vim thinks are incorrect, use the following common key combinations:

]s to go to the next misspelled word
[s to go the previous misspelled word
z= display suggestions for correct spelling
zg add word as a correctly spelt word

There are many other key combinations and options available to explore with vim spell checking; run

:help spell

to see the extended documentation vim provides.

Visual Mode

With vi, if you wanted to edit a section of code or text within a file, you would place the cursor on the last line you wished to edit and place a marker by typing:

ma

If you wanted to copy this section then you would place the cursor at the start of the section and enter:

y'a

which yanks all the code or text from the cursor to the marker 'a' you just placed.  Then you would move the cursor to where you wanted to place the section and press:

p

With vim this all works fine, but there is a slightly quicker way of achieving it using vim's visual mode.  In visual mode you highlight the text you wish to edit and then type an editing command.  Three modes are available: line-by-line, character and block.  Line-by-line and character are probably the most useful, but block mode may be handy if you need to edit text tables.  The modes are available via the following key combinations:

SHIFT-v – line by line mode
v – character mode
CTRL-v – block mode

So now, with vim's visual mode, to copy a section of code as per the vi example, you simply enter visual mode, highlight the code you wish to copy and then press:

y

to yank the highlighted code into the unnamed register.  Move the cursor to where you wish to place the code and then press:

p

to put a copy of the code there.  Slightly simpler, and slightly quicker.