Category Archives: GNU/Linux

Building from Source Tar Files

Originally written on 04 February, 2010 01:46 AM for the MOSS Magazine Issue #2 (08 February, 2010). I’m republishing it here so that it will be in the public domain as well.

This how-to explains what “.tar.gz” files are and how they are used in general. We received an email from one of the readers asking for an article on how to work with “.tar.gz” files and how applications distributed as tar files can be installed and used.

A “.tar.gz” file, often simply called a “tar file” or “tarball”, is an archive format. It usually comes compressed with a format generally available on a GNU/Linux system, such as gzip, bzip2 or lzma. A command line program called “tar” exists for the purpose of creating and handling tar files. Simply put, a “.tar.gz” file serves the same purpose as the “.zip” archive format.

Typically GNU/Linux programs are distributed in this format. Most follow the convention of using “program-name_1.0.1_src.tar.gz” for the source code archive and “program-name_1.0.1.tar.gz” for the binary compilation.

Let’s begin with using these files on the latest version of Ubuntu. We’ll also download a small tool as a sandbox to see how programs are built from source code on these platforms. At this point, it is worth knowing how to distinguish source tar files from binary tar files. That way, you can find out early on whether a binary package has already been generated for the distribution you’re using. For example, on Debian and Ubuntu derivatives, programs are packaged as “.deb” files, which means you do not need to download the source tarball, extract it, and configure and build it from scratch.

Let’s download a small utility that lets you test the performance of websites. The tool is developed and provided by HP. You can download the source code at http://httperf.googlecode.com/files/httperf-0.9.0.tar.gz. Once done, navigate to the download folder from the command line, e.g. /home/user/Downloads/. Issue the following commands to extract and build it. Note that this is a very primitive way of building most programs on GNU/Linux, and it is almost the same for most programs out there.
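
In a terminal, the whole sequence looks something like this (assuming the archive was saved as httperf-0.9.0.tar.gz in your Downloads folder):

  tar -zxvf httperf-0.9.0.tar.gz   # extract the archive
  cd httperf-0.9.0                 # enter the extracted directory
  mkdir build                      # create a separate build directory
  cd build
  ../configure                     # prepare the build for this system
  make                             # compile the source code
  sudo make install                # copy the results into the system paths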

Let’s go through what’s happening above. The first line runs the “tar” program, which handles tar files. The second part, “-zxvf”, is a set of command line options that tells the tar program what to do. The third argument is the name of the tar file to perform the actions on. You can do the same by right-clicking on the file in the GNOME file browser (Nautilus) and selecting “Extract Here”.

The command line options are (their long-form equivalents are shown after the list):

  • z: filter the archive through gzip. The tar file is compressed with the gzip compression format, as indicated by the second file extension, “.gz”.
  • x: extract files from the archive.
  • v: verbosely list the files processed (optional). This displays a list of the files that are in the archive and were extracted.
  • f: use the archive file or device named by the argument that follows the options; in our case, the third argument, the name of the file.
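
For readability, GNU tar also accepts long-form spellings of the same options, so the extract command above could equivalently be written as:

  tar --gzip --extract --verbose --file=httperf-0.9.0.tar.gz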

Now if you do an “ls” or browse to the Downloads folder on your system through Nautilus, you will find a folder named “httperf-0.9.0”.

The second command “cd” changes the current working directory to the newly extracted directory. We then create a folder named “build” with the third command. We change into that newly created “build” directory with the fourth command.

The fifth command is special, in that it configures the source code to be built for your specific distribution. Since different systems have different file system standards and environments, the configure script knows about these differences and prepares the build appropriately.
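
For instance, most autoconf-generated configure scripts accept the standard “--prefix” switch, which controls where “make install” will later place the files (the default is usually /usr/local):

  ../configure --prefix=/usr/local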

The sixth command actually tells the system to start compiling the source code to create binary files that can be executed on the system. At this point, if you view the “build” directory you will find new files and a “src” folder. If you navigate to the “src” folder, you will find various intermediate build files used by the “make” program and the actual executable named “httperf”. We can run the program here by issuing “./httperf --help”. It will run and display the help information for the program.
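
For example, once “make” finishes you can try the freshly built binary straight from the source tree:

  cd src
  ./httperf --help   # prints the program's usage information
  cd ..              # return to the build directory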

The last line is also special in that it actually copies the necessary files to the system paths. It installs the executable into the system’s directory for locally built binaries, “/usr/local/bin/” (the same goes for the “idleconn” program), and finally installs the man (manual) page in “/usr/local/share/man/man1/”, which can be viewed by executing “man httperf” on the command line.
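
As a quick sanity check after installing, you can confirm where everything ended up:

  which httperf   # should print /usr/local/bin/httperf
  man httperf     # opens the installed manual page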

There you have it. You’ve successfully built and installed a program on your system. Now, at any time, you can run the “httperf” program from your command line. This is a typical program build process, as mentioned before. It can simply be uninstalled by executing “sudo make uninstall” from the same build directory (“/home/user/Downloads/httperf-0.9.0/build/”).

Now for how a binary tarball differs. You can extract any type of “.tar.gz” file with the first command mentioned above. If you list the extracted files with the command “ls -l”, it will display the directory in list fashion with the file permissions, owner, group, file size and modification date as columns. If any files appear in bold or have an “x” in the file permission block, they can be executed. All you have to do is type in the command “./program-name” and the program will be executed. A non-source tarball will not have the “configure” script or files like “install” or “Makefile.*”.
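
As a sketch, with “program-name” standing in for whatever executable the tarball actually contains:

  ls -l            # look for an "x" in the permissions column, e.g. -rwxr-xr-x
  ./program-name   # runs the pre-built executable directly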

You can find out more about working with tar files by searching the web, which will turn up various sites and weblogs that explain how to work with tar files, as well as how to build and run programs distributed in them.

Why FOSS?

What is the advantage of choosing Free and Open Source software over proprietary software?

Security
Many studies have found that FOSS is less vulnerable to attacks and malware than proprietary systems. And when vulnerabilities are found, they are usually patched faster.

Reliability/Stability
In a 10-month stress test in 1999, Windows NT (the top Windows server platform at the time) crashed an average of once every 6 weeks. None of the Linux servers crashed during that entire period.

Open standards and vendor independence
Open standards give users, whether individuals or governments, flexibility and the freedom to change between different software packages, platforms and vendors.

Reduced reliance on imports
Proprietary software is almost always imported, with the money going out of the country. FOSS is usually financially free, and can also be developed within the country.

Developing local software capacity
With its low barriers to entry, FOSS encourages the development of a local software and support industry.

Localisation
The easily-updatable nature of FOSS allows for the fast creation of software that is tailored to the local language and culture. This is almost impossible with proprietary software.

Note: Taken from a flier I got during Apache Asia 2009 Roadshow in Colombo. Supplement prepared by The Linux Center, Sri Lanka.

Apache Asia 2009 Roadshow (Day 1)

Today ends the first day of the (partly) 3-day Apache Asia 2009 Roadshow seminar in Colombo, Sri Lanka. I had been looking forward to attending this event, and now that I’ve been able to, I’m grateful to those who put aside time for me to get out of Male’ and attend it. It’s partly 3 days because the 3rd day is supposed to be an unconference. It’s a clever term: it means winding down the event with a participant-driven conference centered around a theme or purpose, a format primarily used in the geek community.

The event has 3 keynote speakers, one of whom is a distinguished Sri Lankan professor named Mohan Munasinghe. He is a physicist with a focus on energy, sustainable development and climate change. He was also the Vice Chairman of the Intergovernmental Panel on Climate Change (IPCC), the organization that shared the 2007 Nobel Peace Prize with former Vice President of the United States Al Gore. He talked about sustainable ICT, the environment, and climate change in general. You can read more about Professor Mohan Munasinghe on Wikipedia.

Following the keynote speech by Professor Mohan Munasinghe, Greg Stein delivered his keynote, “Reflecting on 10 years with ASF”. He is a director of the Apache Software Foundation, and served as chairman for a couple of years in the past. His talk was particularly interesting as it focused on key lessons he identified during his career as a developer, how he came to be a director of the ASF, and how he then chaired the foundation for some time. He went on to talk about his past experiences and how the audience could relate to them and pursue a similar path.

The rest of the talks were from experienced Sri Lankan developers who have been regularly contributing to and driving the course of certain Apache projects: Axis2, Apache Synapse, Stonehenge and Apache Woden, to name a few. How I wish we had contributors like that back at home.

To comment on some of the talks: I found them particularly interesting because these projects squarely target enterprise Service Oriented Architecture (SOA). Middleware applications such as these enable in-house developers to tap into and expose data on disparate, heterogeneous (even legacy) systems so that it can be consumed, transformed and utilized by more modern interfaces, creating an interconnected platform.

These systems make use of industry-standard protocols such as SOAP and WSDL, and are designed to interoperate with other stacks, platforms and libraries. Given enough thought, some of these projects are certain to be fruitful for enterprise developers who work heavily on integrations.

Shirts and hats were awarded for questions asked by the audience at the end of the talks. As for the unconference, I’m not sure how participants were selected. If I remember correctly, 25 people were selected by the organizing team. To my surprise, my name was called out almost at the end of the day and I was awarded a cap and invited to the unconference. I’m assuming my contact at foss.lk, Suchetha Wijenayake, who was on the organizing team, must have put my name forward.

The agenda can be found at the event website. I shall follow up with the events of the next days of the conference. Cheers.

Linux magazines and DVDs giveaway

Over the course of 2007 I collected some Linux magazines and the DVDs that come along with them. I’m giving them away now that they’re outdated for me and since they take up some of my bookshelf space.

[Photo: the magazines and DVDs, 20 November 2009]

Here’s a list of the issues, with links to their web pages:

  • Linux Magazine. Issue 73. December 2006.
  • Linux Magazine. Issue 74. January 2007.
  • Linux Magazine. Issue 75. February 2007.
  • Linux Magazine. Issue 77. April 2007.
  • Linux Magazine. Issue 78. May 2007.
  • Linux Magazine. Issue 79. June 2007.
  • Linux Magazine. Issue 80. July 2007.
  • Linux Magazine. Issue 81. August 2007.
  • Linux Magazine. Issue 82. September 2007.
  • Linux Magazine. Issue 83. October 2007.
  • Linux Magazine. Issue 84. November 2007.
  • Linux Magazine. Issue 85. December 2007.
  • Linux Magazine. Issue 86. January 2008.
  • Linux Magazine. Issue 87. February 2008.
  • Linux Journal. Issue 158. June 2007.
  • Linux Journal. Issue 159. July 2007.
  • Linux Magazine. Volume 10. Issue 2. February 2008.
  • Linux Magazine. Volume 10. Issue 3. March 2008.
  • Linux Magazine. Volume 10. Issue 4. April 2008.
  • Linux Magazine. Volume 10. Issue 5. May 2008.

That’s about it. All the DVDs are from the German Linux Magazine, which is known in the US and Canada as Linux Pro Magazine. The second set of Linux Magazines (the ones listed by volume) is from www.linux-mag.com, which dates way back to 1999. Cheers.

Update 2009-11-23: I’ve given them away to @iharis. Bug him if anyone is interested; he’ll be passing them along once he’s done with them.