
Hacker Public Radio

Your ideas, projects, opinions - podcasted.

New episodes Monday through Friday.

Correspondent




Host ID: 131

episodes: 16

1364 - Vintage Tech Iron Pay Phone Coin Box | 2013-10-24

A review of vintage tech, in the form of an iron pay phone coin box.

Photos: Vintage_Tech_Iron_Pay_Phone_Coin_Box (five images)


1363 - Some pacman Tips By Way of Replacing NetworkManager With WICD | 2013-10-23

A while back, I used my Arch laptop to pre-configure a router for a customer, which of course required me to set up a static IP on my eth0. I should have done this from the command line; instead I used the graphical Network Manager. I had a lot of trouble getting the graphical application to accept a change in IP, and then getting it to go back to DHCP when I was done, and I wound up going back and forth between Network Manager and terminal commands. I've mentioned before that my connection to my ISP sits behind two NATed networks: the router in the outbuilding where the uplink to the ISP is (this is also the network my server is on) and the router in my house. The static IP I used for the customer router configuration was in the same address range as my "outside" network. Though I successfully got eth0 back on DHCP, there was a phantom adapter still out there on the same range as the network my server was on, preventing me from ssh'ing in. I did come across a hack: if I set eth0 to an IP and mask of all zeros, then stopped and started dhcpcd on eth0, I could connect. I had also used the laptop on a customer's WiFi recently, and the connection was horrible.

I decided to see if just installing the wicd network manager would clear everything up (and it did), but before installing Wicd, I had to update the system, so first, a little bit about pacman.

Arch's primary package manager is pacman. The -S operator is for sync operations, including package installation. For instance, 'sudo pacman -S <package_name>' installs a package from the standard repos and is more or less equivalent to the Debian instruction 'sudo apt-get install <package_name>'.

The option -y used with -S refreshes the master package list, and -u updates all out-of-date packages, so 'sudo pacman -Syu' is equivalent to the Debian instruction 'sudo apt-get update' followed by 'sudo apt-get upgrade'. Running 'sudo pacman -Syu <package_name1> <package_name2>' would update the system, then install the selected packages.

Perhaps because of my slow Internet, the first time through, a few of the update packages timed out without downloading, so nothing installed. The second time through, one of the repos didn't even refresh. Thinking this was a connectivity problem, I kept trying the same update command over and over. Finally, I enlisted the help of Google. 'pacman -Syy' forces a refresh of all package lists "even if they appear to be up to date". This seems to automagically fix the timeout and connection problems, and the next time I ran the update, it completed without complaint. I was mad at myself when I found the solution, because I remembered I'd had the exact same problem and the exact same solution before and had forgotten them. Podcasting your errors is a great way of setting them in your memory.
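For reference, the sequence that finally got things moving again on my machine was just the forced refresh followed by the normal upgrade, run as root or with sudo:

sudo pacman -Syy
sudo pacman -Syu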

About the same time, I ran out of space on my 10Gb root partition. I remembered Peter64 had a similar problem, but I found a different solution than he did. 'sudo pacman -Sc' removes packages that are no longer installed from the pacman cache, as well as currently unused sync databases, to free up disk space. I got 3Gb back! 'pacman -Scc' removes all files from the cache.
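If you are curious how much space you stand to recover before cleaning, the cache lives in pacman's default location, /var/cache/pacman/pkg, so a quick check followed by the clean looks like:

du -sh /var/cache/pacman/pkg
sudo pacman -Sc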

https://wiki.archlinux.org/index.php/Wicd

Use pacman to install the package 'wicd' and if you want a graphical front end, 'wicd-gtk' or 'wicd-kde' (in the AUR). For network notifications, install 'notification-daemon', or the smaller 'xfce4-notifyd' if you are NOT using Gnome.

None of this enables wicd or makes it your default network manager on reboot; that you must do manually. First, stop all previously running network daemons (like netctl, netcfg, dhcpcd, NetworkManager); you probably won't have them all. Let's assume for the rest of the terminal commands that you are root, then do:

# systemctl stop <service_name> , e.g. # systemctl stop NetworkManager

Then we have to disable the old network tools so they don't conflict with wicd on reboot.

# systemctl disable <service_name> , e.g. # systemctl disable NetworkManager

Make sure your login is in the users group # gpasswd -a USERNAME users

Now, we have to initialize wicd # systemctl start wicd.service # wicd-client

Finally, enable wicd.service to load on your next boot up # systemctl enable wicd.service
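Put together, and assuming NetworkManager is the only old daemon you need to retire and you want the GTK front end, the whole switch looks roughly like this (run as root, substituting your own login for USERNAME):

pacman -S wicd wicd-gtk
systemctl stop NetworkManager
systemctl disable NetworkManager
gpasswd -a USERNAME users
systemctl start wicd.service
systemctl enable wicd.service
wicd-client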


1356 - So, you've just installed Arch Linux, now what? Arch Lessons from a Newbie, Ep. 01 | 2013-10-14

Manually installing packages from the AUR

Since completing my conversion from Cinnarch to Antergos (http://antergos.com/antergos-2013-05-12-were-back/ ; the published tutorial didn't work for me the first time, but the new Antergos forums were most helpful: http://forum.antergos.com/viewtopic.php?f=28&t=944&p=2892#p2892), a few utilities I installed under Cinnarch seem to be unavailable, notably 'yaourt' (Yet AnOther User Repository Tool), the package manager for the AUR (Arch User Repository). The AUR is an unofficial, "use at your own risk" repository, roughly analogous to using a PPA in Ubuntu. I tried 'sudo pacman -S yaourt' and learned it wasn't found in the repositories (I should note that when I removed the old Cinnarch repos from /etc/pacman.conf, I must have missed including the new Antergos repos somehow). I have since completed the transition.

Anyway, some experienced Arch users like Peter64 and Artv61 had asked me why I was using yaourt anyway instead of installing packages manually, which they considered to be more secure. I decided to take the opportunity to learn how to install packages manually, and to my surprise, it was not nearly as complex as I had feared. I had promised a series of podcasts along the theme, "So, you've just installed Arch Linux, now what?" This may seem like I've jumped ahead a couple of steps, but I wanted to bring it to you while it was fresh in my mind.

Your first step may be to ensure you really have to resort to the AUR to install the app you are looking for. I'd found Doc Viewer allowed me to access PDFs in Arch, but I really preferred Okular, which I'd used in other distros. When 'sudo pacman -S okular' failed to find the package, I assumed it was only available from the AUR. However, a Google search on [ arch install okular ] revealed the package I needed was kdegraphics-okular, which I installed from the standard Arch repos.

Once you've determined the package you need exists in the AUR and not in the standard repos, you need to locate the appropriate package build; your Google search will probably take care of that. The URL should be in the form http://aur.archlinux.org/packages/<package-name>. For the sake of example, let's go to https://aur.archlinux.org/packages/google-chrome/. Chromium is already in the standard Arch repos, but if you want Chrome, you will have to find it in the AUR. Find the link labeled "Download the tarball"; it will be a file ending in .tar.gz. Before downloading a file, the Arch Wiki instructions for manually installing packages from the AUR (http://wiki.archlinux.org/index.php/Arch_User_Repository) recommend creating a designated folder to put them in; they suggest creating a "builds" folder in your home directory.

If you have a multi-core machine, you may be able to take advantage of a slight compiler performance increase by making adjustments to your /etc/makepkg.conf. Look for the line beginning with CFLAGS=; it should have a first parameter that looks like -march=x86_64 or -march=i686. Whichever it is, change it to -march=native and eliminate the second parameter that reads -mtune=generic. This will cause gcc to autodetect your processor type. Edit the next line, which begins with CXXFLAGS, to read CXXFLAGS="${CFLAGS}"; that just causes the CXXFLAGS setting to echo CFLAGS. Details are located at http://wiki.archlinux.org/index.php/Makepkg.conf.
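To make that concrete, here is a rough before-and-after sketch of the relevant lines in /etc/makepkg.conf (the trailing optimization flags shown here are only illustrative; keep whatever your file already has after the -march/-mtune parameters):

Before:  CFLAGS="-march=x86_64 -mtune=generic -O2 -pipe"
         CXXFLAGS="-march=x86_64 -mtune=generic -O2 -pipe"
After:   CFLAGS="-march=native -O2 -pipe"
         CXXFLAGS="${CFLAGS}"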

Before installing your first AUR package, you will have to install base-devel ('pacman -S base-devel', as root or with sudo). Look for that .tar.gz file you downloaded; still using Chrome as an example, it's google-chrome.tar.gz. Unravel the tarball with "tar -xvzf google-chrome.tar.gz". Now, in your ~/builds folder you should have a new directory named "google-chrome". Drop down into the new folder. Since user repos are not as trusted as the standard ones, it might be a good idea to open PKGBUILD and look for malicious Bash instructions. Do the same with the .install file. Build the new package with "makepkg -s". The "-s" switch lets makepkg resolve any unmet dependencies by prompting you for your sudo password.

You will have a new tarball in the format <application name>-<application version>-<package revision>-<architecture>.pkg.tar.xz; in our google-chrome example, the file name was google-chrome-27.0.1453.110-1-x86_64.pkg.tar.xz. We install it with pacman's upgrade function: "pacman -U google-chrome-27.0.1453.110-1-x86_64.pkg.tar.xz". This command installs the new package and registers it in pacman's local package database.
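Pulling the preceding paragraphs together, the whole manual AUR routine looks roughly like this; google-chrome is just the running example, and the exact version string in the final file name will be whatever makepkg produces on your system:

mkdir -p ~/builds && cd ~/builds
# download the tarball linked from the package's AUR page into ~/builds, then:
tar -xvzf google-chrome.tar.gz
cd google-chrome
less PKGBUILD            # inspect for anything suspicious
less *.install           # if the package ships an install script
makepkg -s               # build, pulling in any missing dependencies
sudo pacman -U google-chrome-27.0.1453.110-1-x86_64.pkg.tar.xz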

Before running Arch, I did not realize spell checking was centrally configured in Linux; I always assumed each application had its own spell checker. After installing Arch, I noticed auto-correct wasn't working anywhere. At length, I looked for a solution. I found LibreOffice and most browsers rely on hunspell for spell checking functions. To get it working, you just need to install hunspell and the hunspell dictionary appropriate for your language, e.g. "pacman -S hunspell hunspell-en".

StraightTalk/Tracphone, a quick review.

Before leaving for Philadelphia last spring, I decided I needed a cheap smartphone on a prepaid plan. The only one with reliable service in my area is StraightTalk, or Tracphone, sold in Walmart. For $35 a month, they advertise unlimited data, talk, and text. The one drawback: any form of tethering, wired or wireless, violates StraightTalk's TOS (frankly, I missed that condition before buying the phone). Hmm, would Chromecast count? Anyway, for some people, no tethering would be an immediate deal breaker. Frankly, I can see the advantages of tethering, but the one scenario I'm most interested in is isolating an infected system from a customer's network while still being able to access anti-malware resources. The budget phone I bought only supports 3G, and I'm not in the habit of streaming media to it, much less sharing it with another device.

That doesn't mean I don't use the bandwidth. I put a 16 gig SD card in my phone and started using it as an additional pipeline to download Linux ISOs. Anything I download, I can transfer to my network with ES File Explorer. I downloaded several gigs in the first month to test the meaning of Unlimited. Towards the end of the month, and after I had bought a prepaid card for the next month, I had an off-again, on-again data connection. I thought the provider was punishing me for being a hog; it turns out the phone was glitchy, and turning it off and back on again always re-establishes the data connection. Therefore, I am happy to report that StraightTalk actually seems to mean what they say when they advertise "Unlimited". Unfortunately, many of my direct downloads fail the md5sum check. Direct downloads on 3G come down as fast as 75-100 MBps, but torrents seem to top out at 45 MBps, the same as my home connection.


1290 - MultiSystem: The Bootable Thumb Drive Creator | 2013-07-12

MultiSystem is a tool for creating bootable USB thumb drives that give you the option of launching multiple ISO images and other built-in diagnostic utilities. It can be an invaluable tool for system repair techs. Not to mention the many recovery and repair Live CDs that are available to fix Linux, most bootable Windows repair and anti-virus utilities run from a Linux-based ISO. The tech can even create ISO images of Windows installation media and replace a stack of DVDs with one thumb drive. Besides the installable package, there is also a MultiSystem LiveCD (http://sourceforge.net/projects/multisystem/) that, if I understand correctly, contains some recommended ISOs to install on your thumb drive.

MultiSystem Icon

For complete episode show notes please see
http://hackerpublicradio.org/eps/hpr1290.html


1228 - Utilizing Maximum Space on a Cloned BTRFS Partition | 2013-04-17

Utilizing Maximum Space on a Cloned BTRFS Partition

by FiftyOneFifty

  1. If you clone a disk to a disk, Clonezilla will increase (or decrease) the size of each partition in proportion to the relative sizes of the drives.
    1. I wanted to keep my / the same size and have no swap (the new drive was an SSD), so I did a partition-to-partition clone instead
    2. Created partitions on the new SSD with a GParted Live CD, 12Gb root (Ext4) and the remainder for /home (btrfs, because I planned to move to SSD from the start, and last summer only btrfs supported TRIM)
  2. After cloning /dev/sda1 to /dev/sdb1 and /dev/sda2 to /dev/sdb2 using Clonezilla, I inspected the new volumes with the GParted Live CD
    1. /dev/sdb2 had 40% inaccessible space, i.e., the usable space was the same size as the old /home volume
    2. GParted flagged the error and said I could correct it from the menu (Partition->Check), but btrfs doesn't support fsck, so it didn't work
    3. Tried shrinking the volume in GParted and re-expanding it to take up the free space; that also didn't work.
  3. Discovered the 'btrfs' utility, and that it was supported by the GParted Live CD
    1. Make a mount point
      1. sudo mkdir /media/btrfs
    2. Mount the btrfs volume
      1. sudo mount /dev/sdb2 /media/btrfs
    3. Use the btrfs utility to expand the btrfs file system to the maximum size of the volume
      1. sudo btrfs filesystem resize max /media/btrfs
    4. Unmount the btrfs volume
      1. sudo umount /dev/sdb2
  4. Rechecked /dev/sdb2 with GParted; I no longer had inaccessible space (a command-line cross-check is sketched below)
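If you would rather verify from the command line instead of going back into GParted, something like the following works, reusing the same mount point as above:

sudo mount /dev/sdb2 /media/btrfs
df -h /media/btrfs
sudo btrfs filesystem show /media/btrfs
sudo umount /media/btrfs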


1224 - Podio Book Report on Jake Bible's "Dead Mech" | 2013-04-11

In today's show FiftyOneFifty shares his review of the PodioBook "Dead Mech" by Jake Bible, and Reflections Upon Podcasting from the Bottom of a Well
http://podiobooks.com/title/dead-mech/
http://jakebible.com/

1220 - Cinnarch 64 bit, Installation Review | 2013-04-05

Howdy folks, this is FiftyOneFifty, and today I wanted to talk about my experiences installing the 64 bit version of Cinnarch net edition on a dual core notebook. Cinnarch of course is a relatively new Arch based distro running the Cinnamon fork of Gnome. I had previously installed Arch proper on this notebook, but when I rebooted to the hard drive, I lost the Ethernet connection. This is not uncommon, but there the notebook sat until I had time to work the problem. I wanted to start using the notebook, and I'd heard good things about Cinnarch, so it seemed like a simple solution. I went in knowing Cinnarch was in alpha, so I shouldn't have been surprised when an update broke the system less than a week after the install, but that comes later in my story.

Complete show notes are available here: http://hackerpublicradio.org/eps/hpr1220/index.html


1194 - Copying a Printer Definition File Between Systems | 2013-02-28


I recently learned where Linux stores the PPD created when you set up a printer and how to copy it between PCs.  I'd like to briefly share that information with you.

This is how to copy a printer definition file (equivalent of a printer driver) from a system where the printer is already configured to another system that you want to be able to access the same printer.  Reasons you might need to do this:

a.  The normal CUPS (Common Unix Printing System) setup doesn't have the right definition file for your printer.  In rare instances, you might have to download a PPD from the manufacturer or another source.  If so, copying the PPD may be easier than downloading it again.

b.  You configure CUPS and find there are no pre-provided printer drivers.  I thought this was the case when I first tried to configure CUPS under Linaro on my ODroidX.  For all intents and purposes, Linaro is an ARM port of mainline Ubuntu (Unity included).  I installed CUPS via Aptitude and tried to configure a printer as I would on any Linux system.  When I got to printer selection, the dropdown to select a manufacturer (the next step would be to choose a model) was greyed out, as was the field to enter a path to a PPD file.  I closed the browser and tried again, and the same thing happened.  This is what prompted me to find out where to locate a PPD file on another system and copy it.  I never got to see how it would work, because when I had the PPD file copied over and ready to install, the manufacturers and models in CUPS were already populated.  There had been an update between my first and second attempts to configure CUPS on the ODroidX, but I'd rather say it was a glitch the first time, instead of the PPDs suddenly showing up in the repo.

c.  When I installed Arch on another system, I found there were far fewer options for choosing models; in my instance, there was only one selection for HP Deskjets.  I suspect borrowing the model-specific PPD from another distro will increase the functionality of the printer.

Copying the ppd

1.  On the computer where the printer is already configured, find the .ppd (PostScript Printer Description) file you generated (the filename will be the same as the printer name) in /etc/cups/ppd/model (or possibly just /etc/cups/ppd; neither my ODroidX nor my Fedora laptop has the "model" folder).
2. Copy it to your home folder on the new system (you can't place the file in its final destination yet, unless you are remoted in as root)
3. According to the post I found on LinuxQuestions.org, CUPS looks for a gzipped file [ gzip -c myprinter.ppd > myprinter.ppd.gz ; the '-c' argument writes the compressed output to standard output rather than replacing the original, so you use redirection to create the new file].  Recall that I never got to try this, because when I re-ran CUPS, the printer selections were already populated.
4. Copy the archived file to /etc/cups/ppd/model on the machine that needs the printer driver (a consolidated sketch of these steps follows below)
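Assuming the two machines can reach each other over SSH, the whole transfer might look something like this; the hostnames and printer name are placeholders, and the source and destination directories may be /etc/cups/ppd rather than /etc/cups/ppd/model on your systems:

# on the machine where the printer already works
gzip -c /etc/cups/ppd/myprinter.ppd > /tmp/myprinter.ppd.gz
scp /tmp/myprinter.ppd.gz user@newmachine:~/
# then, on the machine that needs the driver
sudo cp ~/myprinter.ppd.gz /etc/cups/ppd/model/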

Configure CUPS (IP Printer)
1. Open localhost:631 in a browser
2. Click Administration tab
3. Click "Add a Printer" button
4. Log in as an account with root privileges
5. For Ethernet printers, select the "AppSocket/HP JetDirect" button and click "Continue"
6. From the examples presented, " socket://PRINT_SERVER_IP_ADDRESS:9100  " works for me; click continue
7. On the next page, fill in a printer name; this will be the file name for the PPD generated as well as how the printer is labeled in the printer select dialog.  The other fields are optional.  Click continue.
8. (I am assuming, if the LinuxQuestions post was right, CUPS will find the gz file and show the manufacturer and model as options) From the list, select a manufacturer, or input the path to your PPD file
9. Select the printer model
9a. I think you could copy over the ppd as-is and type the path to it in the field where it asks for a ppd file.
10. Modify or accept the default printer settings

Or just copy the ppd and compare the settings in /etc/cups/printers.conf


1167 - Kernels in the Boot, or What to Do When Your /boot folder Fills Up | 2013-01-22

Synopsis of the Problem


You may have heard me mention that I purchased a used rack server a couple of years ago to help teach myself Linux server administration.  It's an HP DL-380 G3 with dual single-core Xeons and 12Gb of RAM.  It came with two 75Gb SCSI drives in RAID1, dedicated to the OS.  Since the seller wanted more for additional internal SCSI drives, and those old server drives are limited to 120Gb anyway, I plugged in a PCI-X SATA adapter, connected a 750Gb drive externally, and mounted it as /home.  I moved over the 2Tb USB drive I had on my Chumby (as opposed to transferring the files) and it shows up as /media/usb0.  I installed Ubuntu Server 10.04 (recently updated to 12.04) because CentOS didn't support the RAID controller out of the box and I had been frustrated by the lack of support for up-to-date packages on Debian Lenny on the desktop.

With 75Gb dedicated to the OS and application packages, you can imagine my surprise when, after an update and upgrade, I got a report that my /boot was full.  It wasn't until I looked at the output from fdisk that I remembered the Ubuntu installer had created a separate partition for /boot.  At the risk of oversimplifying the purpose of /boot, it is where your current and past kernel files are stored.  Unless the system removes older kernels (most desktop systems seem to), the storage required for /boot will increase with every kernel upgrade.

This is the output of df before culling the kernels

Filesystem              1K-blocks      Used  Available Use% Mounted on
/dev/mapper/oriac-root   66860688   6593460   56870828  11% /
udev                      6072216         4    6072212   1% /dev
tmpfs                     2432376       516    2431860   1% /run
none                         5120         0       5120   0% /run/lock
none                      6080936         0    6080936   0% /run/shm
cgroup                    6080936         0    6080936   0% /sys/fs/cgroup
/dev/cciss/c0d0p1          233191    224953          0 100% /boot
/dev/sda1               721075720 297668900  386778220  44% /home
/dev/sdb1              1921902868 429219096 1395056772  24% /media/usb0

This directory listing shows I had many old kernels in /boot

abi-2.6.32-24-generic-pae
abi-2.6.32-35-generic-pae
abi-2.6.32-36-generic-pae
abi-2.6.32-37-generic-pae
abi-2.6.32-38-generic-pae
abi-2.6.32-39-generic-pae
abi-2.6.32-40-generic-pae
abi-2.6.32-41-generic-pae
abi-2.6.32-42-generic-pae
abi-3.2.0-29-generic-pae
abi-3.2.0-30-generic-pae
abi-3.2.0-31-generic-pae
abi-3.2.0-32-generic-pae
config-2.6.32-24-generic-pae
config-2.6.32-35-generic-pae
config-2.6.32-36-generic-pae
config-2.6.32-37-generic-pae
config-2.6.32-38-generic-pae
config-2.6.32-39-generic-pae
config-2.6.32-40-generic-pae
config-2.6.32-41-generic-pae
config-2.6.32-42-generic-pae
config-3.2.0-29-generic-pae
config-3.2.0-30-generic-pae
config-3.2.0-31-generic-pae
config-3.2.0-32-generic-pae
grub
initrd.img-2.6.32-24-generic-pae
initrd.img-2.6.32-35-generic-pae
initrd.img-2.6.32-36-generic-pae
initrd.img-2.6.32-37-generic-pae
initrd.img-2.6.32-38-generic-pae
initrd.img-2.6.32-39-generic-pae
initrd.img-2.6.32-40-generic-pae
initrd.img-2.6.32-41-generic-pae
initrd.img-2.6.32-42-generic-pae
initrd.img-3.2.0-29-generic-pae
initrd.img-3.2.0-30-generic-pae
initrd.img-3.2.0-31-generic-pae
lost+found
memtest86+.bin
memtest86+_multiboot.bin
System.map-2.6.32-24-generic-pae
System.map-2.6.32-35-generic-pae
System.map-2.6.32-36-generic-pae
System.map-2.6.32-37-generic-pae
System.map-2.6.32-38-generic-pae
System.map-2.6.32-39-generic-pae
System.map-2.6.32-40-generic-pae
System.map-2.6.32-41-generic-pae
System.map-2.6.32-42-generic-pae
System.map-3.2.0-29-generic-pae
System.map-3.2.0-30-generic-pae
System.map-3.2.0-31-generic-pae
System.map-3.2.0-32-generic-pae
vmcoreinfo-2.6.32-24-generic-pae
vmcoreinfo-2.6.32-35-generic-pae
vmcoreinfo-2.6.32-36-generic-pae
vmcoreinfo-2.6.32-37-generic-pae
vmcoreinfo-2.6.32-38-generic-pae
vmcoreinfo-2.6.32-39-generic-pae
vmcoreinfo-2.6.32-40-generic-pae
vmcoreinfo-2.6.32-41-generic-pae
vmcoreinfo-2.6.32-42-generic-pae
vmlinuz-2.6.32-24-generic-pae
vmlinuz-2.6.32-35-generic-pae
vmlinuz-2.6.32-36-generic-pae
vmlinuz-2.6.32-37-generic-pae
vmlinuz-2.6.32-38-generic-pae
vmlinuz-2.6.32-39-generic-pae
vmlinuz-2.6.32-40-generic-pae
vmlinuz-2.6.32-41-generic-pae
vmlinuz-2.6.32-42-generic-pae
vmlinuz-3.2.0-29-generic-pae
vmlinuz-3.2.0-30-generic-pae
vmlinuz-3.2.0-31-generic-pae
vmlinuz-3.2.0-32-generic-pae

The Solution I Found

I ran across some articles that suggested I could use 'uname -r' to identify my current running kernel (3.2.0-31; the -32 kernel apparently ran out of space before it finished installing) and just delete the files with other numbers.  That didn't seem prudent, and fortunately I found what seems to be a more elegant solution on upubuntu.com.
http://www.upubuntu.com/2011/11/how-to-remove-unused-old-kernels-on.html



Verify your current running kernel


uname -r

Linux will often keep older kernels so that you can boot into an older version from Grub (at least on a desktop).  Fedora has a configuration setting to tell the OS just how many old kernels you want to maintain [installonly_limit in /etc/yum.conf].  Please leave a comment if you know of an analog in Debian/Ubuntu.
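For anyone curious, that Fedora setting is a single line in /etc/yum.conf; 3 is the value I have seen shipped as the default, and raising or lowering it changes how many kernels yum keeps around:

installonly_limit=3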

List the kernels currently installed on your system.


dpkg --list | grep linux-image

Cull all the kernels but the current one

The next line is the key; make sure you copy and paste exactly from the shownotes.  I'm not much good with regular expressions, but I can see it's trying to match all the packages starting with 'linux-image' but containing a number string different from the one returned by 'uname -r', and remove those packages.  Obviously, this specific command will only work with Debian/Ubuntu systems, but you should be able to adapt it to your distro.  The '-P' is my contribution, so you can see what packages you are eliminating before the change becomes final.

sudo aptitude -P purge ~ilinux-image-\[0-9\]\(\!`uname -r`\)
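If you would like to eyeball roughly what that expression should be matching before letting the purge proceed, the same two commands from above can be combined into a quick preview (this is only a sanity check, not a substitute for the -P prompt):

dpkg --list | grep linux-image | grep -v "$(uname -r)"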

Make sure Grub reflects your changes

Finally, the author recommends running 'sudo update-grub2' to make sure Grub reflects your current kernel status (the above command seems to do this after every operation anyway, but better safe than sorry).

It's worth noting I still don't have my -32 kernel update, so I'll let you know if there is anything required to get kernel updates started again.

My df now shows 14%  usage in /boot and a directory listing on /boot only  shows the current kernel files.

Filesystem              1K-blocks      Used  Available Use% Mounted on
/dev/mapper/oriac-root   66860688   5405996   58058292   9% /
udev                      6072216        12    6072204   1% /dev
tmpfs                     2432376       516    2431860   1% /run
none                         5120         0       5120   0% /run/lock
none                      6080936         0    6080936   0% /run/shm
cgroup                    6080936         0    6080936   0% /sys/fs/cgroup
/dev/cciss/c0d0p1          233191     29321     191429  14% /boot
/dev/sda1               721075720 297668908  386778212  44% /home
/dev/sdb1              1921902868 429219096 1395056772  24% /media/usb0



abi-3.2.0-31-generic-pae
config-3.2.0-31-generic-pae
grub
initrd.img-3.2.0-31-generic-pae
lost+found
memtest86+.bin
memtest86+_multiboot.bin
System.map-3.2.0-31-generic-pae
vmlinuz-3.2.0-31-generic-pae

1113 - TermDuckEn aptsh - screen - guake | 2012-11-07

I recently discovered apt shell (aptsh), a pseudo-shell which gives users of distributions that use apt for package management quick access to the functionality of apt-get. You should find aptsh in the repositories of Debian-based distros. Once installed, you can launch 'aptsh' as root from the command prompt (i.e. 'sudo aptsh').


One of the drawbacks of installing software from the terminal is that sometimes you don't know the exact name of the package you want to install. From the aptsh> prompt, 'ls' plus a search string will show all the packages that have that string in their names. You can type 'install' plus a partial package name and use TAB completion to finish the instruction. The 'update' and 'upgrade' commands are self-explanatory; unfortunately, you can't string them together on the same line like you can in bash:


sudo apt-get update && sudo apt-get -y safe-upgrade


Instead, you use the backtick [ ` ] key to put aptsh into queue mode. In queue mode, you can enter commands one by one to be launched in sequence at a later time. To bring your system up to date, you could run:


aptsh> `

* aptsh> update

* aptsh> upgrade

* aptsh> `

aptsh> queue-commit-say yes


Backtick toggles queue entry, and queue-commit runs the queue. “queue-commit-say y” tells aptsh to answer in the affirmative to any queries from the commands executed in the queue in much the same way “apt-get -y safe-upgrade” confirms software updates without user interaction. Apt shell is capable of other apt related tasks, but I think I've covered the most useful ones.
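To make the search-and-install workflow described above a little more concrete, a hypothetical session might go like this (the package name is just an example, and your output will vary):

sudo aptsh
aptsh> ls mumble        # list packages with "mumble" in the name
aptsh> install mumble   # TAB completion works on the package name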


The trouble with running aptsh is that unless you start it in a terminal when you start the computer and leave it running all day (as opposed to opening it as a new shell within your terminal every time you want to update or install), despite the convenience of package name search and TAB completion, it really won't save you any keystrokes. With that in mind, I started looking for ways to have the apt shell available at a keystroke (we will leave the wisdom of leaving a shell open with a subset of root privileges for another day). I had guake installed, but rarely used it because I usually have multiple terminal tabs open since I am logged into my server remotely. [Actually, I had forgotten guake supports tabbed terminals quite well. You can open a new tab with <Shift><Ctrl>T and switch between terminal tabs with <Ctrl><PgUp> and <Ctrl><PgDn>, or by clicking buttons that appear at the bottom of the guake window. I had, however, forgotten this until doing further research for this story. Since this revelation ruins my story, we will forget about tabbed terminal support in guake and not mention it again.]


I am also going to assume everyone is familiar with guake. If not, suffice it to say guake is a terminal that pops down in the top third of the screen when you hit a hotkey, <F12> being the default. It returns to the background when you press <F12> again or click the lower part of the desktop. It is patterned after the command shell in the game Quake that let you input diagnostic and cheat codes, hence the name. Since I wasn't using guake as a terminal anyway, I wanted to see if I could make it run apt shell by default. I found you can access guake's graphical configuration manager by right clicking inside the open terminal and selecting preferences.


On the first preferences tab, I found “command interpreter”, but since aptsh is only a pseudo shell, it isn't found in the dropdown list. However, one option was “screen”, which would give me a way to run multiple terminals that I thought guake lacked. Next, I had to look up how to configure screen. I figured there must be a way to make screen run aptsh in one session by default, and I found it. In the show notes I've included my .screenrc file from my home folder, which I made with the help of this article from the online Red Hat Magazine:

http://magazine.redhat.com/2007/09/27/a-guide-to-gnu-screen/


**


hardstatus alwayslastline

hardstatus string '%{= kG}[ %{G}%H %{g}][%= %{=kw}%?%-Lw%?%{r}(%{W}%n*%f%t%?(%u)%?%{r})%{w}%?%+Lw%?%?%= %{g}][%{B}%Y-%m-%d %{W}%c %{g}]'

# Default screens

screen -t shell1 0

screen -t apt-shell 1 sudo aptsh

screen -t server 2 ssh 5150server

screen -t laptop 3 ssh 5150@Redbook


**


The first two lines set up the screen status line: the first puts it at the bottom of the terminal, the second sets up the status line to display the hostname and date, plus an indicator that highlights which screen window you are looking at. The # Default screens section below sets up the sessions screen opens by default. The first line opens a regular terminal named “shell1” and assigns it to window zero. The second opens a window called “apt-shell” (this is how it's identified on the status line) and launches apt shell. The last two log me into my server (hostname aliasing made possible by configuring my home folder's .ssh/config, thanks Ken Fallon) and my laptop running Fedora, respectively. I still have to cycle through my screen windows and type in my passwords for sudo and ssh. The configuration could be set up to launch any bash command or script by default. The cited article doesn't include any more configuration tips, but I'm certain there are ways to set up other options, such as split windows by default.


Since I also run screen on my remote connection to my server, I have to remember the command prefix there is <Ctrl>a, a. Ergo, if I want to move to the next window in the screen session (running under guake) on the local PC, the command is <Ctrl>a, then n. To go to the next screen window in the screen session on my server, running inside another screen session on my local PC, it's <Ctrl>a, a, n.


So, that's how I learned to run apt shell inside screen inside guake. I can be contacted at FiftyOneFifty@linuxbasement.com or by using the contact form on TheBigRedSwitch.DrupalGardens.Com


1106 - Of Fuduntu, RescaTux (or the Farmer Buys a Dell) | 2012-10-29

This is another one of my How I Did It podcasts (or How I Done It if you'd rather), where my goal is to pass along the things I learn as a common Linux user administering my home computers and network, and engaging in the kinds of software tinkering that appeal to our sort of enthusiast.


I'd been thinking for a while about replacing the small computer on my dinner table. I had been using an old HP TC1000, one of the original active stylus Windows tablets, of course now upgraded to Linux. With the snap in keyboard, it had a form factor similar to a netbook, with the advantage that all the vulnerable components were behind the LCD, up off the table and away from spills. It had served my purpose of staying connected to IRC during mealtimes, and occasional streaming of live casts, but I wanted more. I wanted to be able to join into Mumble while preparing meals, I wanted to be able to load any website I wanted without lockups, and I wanted to stream video content and watch DVDs.


I was concerned that putting a laptop on the table was an invitation to have any spilled beverage sucked right into the air intakes, and I never even considered a desktop system in the dining room until I saw a refurbished Dell Inspiron 745 on GearXS.com (I wouldn't normally plug a specific vendor, but GearXS is now putting Ubuntu on all its used corporate cast-off systems). This Dell has the form factor that is ubiquitous in point-of-sale: a vertical skeleton frame with a micro system case on one side and a 17” LCD on the other, placing all the electronics several inches above the surface on which it sits. I even found a turntable intended for small TVs that lets me smoothly rotate the monitor either toward my place at the table or back towards the kitchen where I am cooking. I already had a sealed membrane keyboard with an integrated pointer and a wireless-N USB dongle to complete the package. Shipped, my “new” dual-core 2.8GHz Pentium D system with an 80Gb hard drive and Intel graphics was under $150. [The turntable was $20 and an upgrade from 1Gb to 4Gb of used DDR2 was $30, but both were worth it.] Since the box shipped with Ubuntu, I thought installing the distro of my choice would be of no consequence, and that is where my tale begins.


I'm going to start my story towards the end, as it is the most important part. After the installation of four Linux distros in as many days (counting the Ubuntu 10.04 LTS the box shipped with, a partial installation of SolusOS 2r5, Fuduntu, and finally Lubuntu 12.04), I discovered I couldn't boot due to Grub corruption (the machine POSTed, but where I should have seen Grub, I got a blank screen with a cursor in the upper left corner).


A. I thought I would do a total disk wipe and start over, but DBAN from the UBCD for Windows said it wasn't able to write to the drive (never seen that before)

B. Started downloading the latest RescaTux ISO. Meanwhile, I found an article that told me I could repair Grub with an Ubuntu CD https://ubuntunigeria.wordpress.com/2010/09/02/how-to-restore-grub2-using-an-ubuntu-live-cd-or-thumb-drive/ , so I tried booting from the Lubuntu 12.04 CD (using the boot device selector built into the hardware). Same black screen, preceded by a message that the boot device I had selected was not present. Same thing with the Fuduntu DVD that had worked the day before. With the exception of UBCD, I couldn't get a live CD to boot.

C. Now having downloaded the RescaTux ISO, and suspecting a problem with the optical drive, I used Unetbootin to make a RescaTux bootable thumb drive. RescaTux ( http://download2.berlios.de/rescatux/rescatux_cdrom_usb_hybrid_i386_486-amd64_0.30b7_sg2d.iso ) has a pre-boot menu that lets you choose between 32 and 64 bit images, but that was as far as I got; nothing happened when I made my selection.

D. At this point, I was suspecting a hardware failure that just happened to coincide with my last install. This is an Ultra Small Form Factor Dell, the kind you see as point-of-sale or hospital systems, so there weren't many components I could swap out. I didn't have any DDR2 lying around, but I did test each of the two sticks the system came with separately, with the same results. I then reasoned a Grub error should go away if I disabled the hard drive, so I physically disconnected the drive and disabled the SATA connector in the BIOS. I still couldn't boot to a live CD. Deciding there was a reason this machine was on the secondary market, I hooked everything back up and reset the BIOS settings to the defaults; still no luck.

E. As a Hail Mary the next day, I burned the RescaTux ISO to a CD and hooked up an external USB optical drive. This time, I booted to the Live CD, did the two-step Grub repair, and when I unplugged the external drive, I was able to boot right into my Lubuntu install. Now booting to Live CDs from the original optical drive and from the thumb drive worked. RescaTux FTW.


Now a little bit on how I got into this mess. As I said, the Dell shipped with 10.04, but I wanted something less pedestrian than Ubuntu (ironic that I wound up there anyway). I tried Hybride, but once again, like my trial on the P4 I mentioned on LinuxBasix, the Live CD booted but the icons never appeared on the desktop (I think it's a memory thing; the Dell only shipped with a gig, shared with the integrated video). After Hybride, I really wanted to be one of the cool kids and run SolusOS, but the install hung twice transferring boot/initrd.img-3.3.6-solusos. I cast around for a 64-bit ISO I had on hand and remembered I'd really wanted to give Fuduntu a try. Fuduntu is a rolling-release fork of Fedora, with a Gnome 2 desktop, except that the bottom bar is replaced with a Mac-style dock, replete with bouncy icons (cute at first, but I could tell right away they would get on my nerves). However, I found I liked the distro, despite the fact that I found the default software choices a little light for a 900Mb download (Google Office, Chromium, no Firefox, no Gimp). Worst of all, no Mumble in the repos at all (really, Fuduntu guys? While trying to install Mumble, do you know how many reviews I found that can be summed up as "Fuduntu is great, but why is there no Mumble?"). Unfortunately, I put Mumble on the back burner while I installed and configured my default set of comfort apps from the repos (Firefox, XChat, Gimp, VLC, LibreOffice, etc.). [BTW, with the anticipated arrival of a 2.4GHz headset, I hope to be able to use the new machine to join the LUG/podcast while preparing, and dare I say eating, dinner.]


I visited the Mumble installation page on SourceForge and found they no longer linked to .deb files and Fedora .rpms, as they assume you can install from your repositories. Thinking someone must have found an easy solution, I hit Google. The best answer I found was a page on the Fuduntu forums (http://www.fuduntu.org/forum/viewtopic.php?f=21&t=2237), which suggested downloading the Mumble .rpm and a dozen prerequisite library .rpms from a third-party site called rpm.pbone.net. I visited pbone.net and found that when I looked up each library, I got a dozen different links to versions of the file. Then I saw a link that seemed to offer the promise of simplifying my task: if I subscribed to pbone.net, I could add their whole catalog as a repo. While researching the legitimacy of pbone.net, I found them mentioned in the same sentence as RPMFusion as an alternate repository for Fedora. I decided to install the RPMFusion repos as well, thinking I might find some of the needed libraries in there. I registered with pbone, and discovered I would only have access to their repository free for 14 days, after which it would cost $3 a month (after all, hosting such a service must cost money). I figured the free trial would at least get Mumble installed, and went through the setup. Among the questions I had to answer were which Fedora version I was running (I picked 17, since Fuduntu is rolling) and 32 or 64 bit. pbone.net generated a custom .repo file to place in my /etc/yum.repos.d directory. At this time, I'd already set up RPMFusion.


The fun started when I ran 'yum update'. I got "Error: Cannot find a valid baseurl for repo: rpmfusion-free". It turns out ( http://optics.csufresno.edu/~kriehn/fedora/fedora_files/f10/howto/repositories.html ) the locations of the RPMFusion servers are usually commented out in the .repo files; Fedora must know where they are, but I guess Fuduntu does not. I uncommented each of the baseurl statements (there are three) in each of the RPMFusion .repo files (there are four files: free, non-free, free-testing, and non-free-testing). I then re-ran 'yum update'; this time I was told the paths for the RPMFusion baseurls didn't exist. I opened the path in a browser and confirmed it was indeed wrong. I pruned subdirectories from the path one by one until I found a truncated URL that actually existed on the RPMFusion FTP server. I looked at the .repo files again and figured out that the paths referenced included global environment variables that were inconsistent between Fedora and Fuduntu. For instance, $release in Fedora would return a value like 15, 16, or 17, whereas in Fuduntu it resolves to 2012. I figured if I took the time, I could walk up and down the FTP server and come up with literal paths to put in the RPMFusion .repo files, but instead I just moved the involved .repo files into another folder to be dealt with another day.


I again launched 'yum update'. This time there were no errors, but I was getting an excessive number of new files from my new pbone.net repo ('yum update' updates your sources and downloads changed files all in one operation). It's possible the rolling Fuduntu is closer to Fedora 16, so when I told pbone.net I was running 17, all the files in the alternate repo were newer than what I had. In any case, I had no wish to be dependent on a repo I had to rent at $3 a month, so I canceled the operation, admitted defeat, and started downloading the 64-bit version of Lubuntu. I know I said I would rather have a more challenging distro, but because of its location, this needs to be a just-works PC, not a hack-on-it-for-half-a-day box. I would have liked to give Mageia, Rosa, or PCLinuxOS a shot, but too many packages from outside the repos (case in point, Hulu Desktop) are only available in Debian and Fedora flavors. You know the rest: I installed Lubuntu, borked my Grub, loop back to the top of the page.


1101 - Recovery of an (en)crypted home directory in a buntu based system | 2012-10-22

Recovery of an (en)crypted home directory in a 'buntu based system

by 5150


This is going to be the archetypal “How I Did It” episode, because it fulfills the criterion of dealing with an issue most listeners will most likely never have to resolve, but which might be invaluable to those few who some day encounter the same problem: how to recover an encrypted home folder on an Ubuntu system.

I enabled home folder encryption on installation of a Linux Mint 8 system some years back, and it never gave me trouble until the day that it did. Suddenly, my login would be accepted, but then I would come right back to GDM. Finally I dropped into a text console to try to recover the contents of my home folder, and instead found two files, Access-Your-Private-Data.desktop and README.txt. README.txt explained that I had arrived in my current predicament because my user login and password for some reason were no longer decrypting my home folder (Ubuntu home folder encryption is tied to your login; no additional password is required). Honestly, until I lost access to my files, I'd forgotten that I'd opted for encryption. I found two articles that described similar methods of recovery. I tried following their instructions and failed, probably because I was mixing and matching what seemed to be the easiest steps to implement from the two articles. When I took another look at the material weeks later, I discovered I had missed a link in the comments that led me to an improved method added in Ubuntu 11.04 that saves several steps: http://blog.dustinkirkland.com/2011/04/introducing-ecryptfs-recover-private.html

  1. Boot to an Ubuntu distribution CD (11.04 or later)

  2. Create a mount point and mount the hard drive. Of course, if you configured your drive(s) with multiple data partitions (root, /home, etc.) you would have to mount each separately to recover all the contents of your drive, but you only have to worry about decrypting your home directory. If you use LVM, and your home directory spans several physical drives or logical partitions, I suspect things could get interesting.

    1. $sudo mkdir /media/myhd

      1. /media is owned by root, so modifying it requires elevation

    2. You need to confirm how your hard drive is registered with the OS. I just ran Disk Utility and confirmed that my hard drive was parked at /dev/sda; that meant that my single data partition would be at /dev/sda1

    3. $sudo mount /dev/sda1 /media/myhd

    4. Do a list on /media/myhd to confirm the drive is mounted

      1. $ls /media/myhd

    5. The new recovery command eliminates the need to re-create your old user

      1. $sudo ecryptfs-recover-private (yes, ecrypt not encrypt)

      2. You will have to wait a few minutes while the OS searches your hard drive for encrypted folders

        1. When a folder is found, you will see

          INFO: Found [/media/myhd/home/.ecryptfs/username/.Private].

          Try to recover this directory? [Y/n]

          • Respond “Y”

        2. You will be prompted for your old password

        3. You should see a message saying your data was mounted read only at

          /tmp/ecryptfs.{SomeStringOfCharacters}

          • I missed the mount point at first; I was looking for my files in /media/myhd/home/myusername

    6. If you try to list the files in /tmp/ecryptfs.{SomeStringOfCharacters}, you will get a “Permission Denied” error. This is because your old user owns these files, not your distribution CD login

      1. [You will probably want to copy “/tmp/ecryptfs.{SomeStringOfCharacters}” into your terminal buffer, as you will need to reference it in commands. You can select it with your mouse in the “Success” message and copy it with <Ctrl><Alt>c, then paste it later with <Ctrl><Alt>v]

      2. I tried to take ownership of /tmp/ecryptfs.{SomeStringOfCharacters}; you would have thought that would work.

        1. From my command prompt, I can see my user name is “ubuntu”

        2. $ sudo chown -R ubuntu /tmp/ecryptfs.{SomeStringOfCharacters}

          • -R takes ownership of subdirectories recursively

          • It's a good time to get a cup of coffee

    7. Next, we need to copy the files in our home directory to another location; I used an external USB drive (it was automounted under /media when I plugged it in). If you had space on the original hard drive, I suppose you could create a new user and copy the files to the new home folder. I decided to take the opportunity to upgrade my distro. Some of the recovered files will wind up on my server and some on my newer laptop.

      1. One could run Ubuntu's default file manager as root by issuing “sudo nautilus &” from the command line (the “&” sends the process to the background so you can get your terminal prompt back)

        1. Before copying, be sure to enable “View Hidden Files” so the configuration files and directories in your home directory will be recovered as well. As I said, there are select configuration files and scripts in /etc I will want to grab as well.

      2. I had trouble with Nautilus stopping on a file it couldn't copy, so I used cp from the terminal so the process wouldn't stop every time it needed additional input.

        1. $ cp -Rv /tmp/ecryptfs.{SomeStringOfCharacters} /media/USBDrive/Recovered

          • Of course the destination will depend on what you've named your USB drive and what folder (if any) you created to hold your recovered files

          • -Rv copies subdirectories recursively and verbosely, otherwise the drive activity light may be your only indication of progress. The cp command automatically copies hidden files as well.

          • Because of the file ownership difficulties, I could only copy the decrypted home folder in its entirety,

      3. I still had trouble with access due to ownership once I detached the external drive and remounted it on my Fedora laptop, but I took care of that with the command below (a condensed recap of the whole procedure follows after it):

        1. $ su -c 'chown -R mylogin /media/USBDrive/Recovered'
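For readers who just want the shape of the whole recovery in one place, here is a condensed sketch of the commands above; the device name, the ecryptfs temp directory, and the USB drive path will differ on your system, and "ubuntu" is the live CD's default user:

sudo mkdir /media/myhd
sudo mount /dev/sda1 /media/myhd
sudo ecryptfs-recover-private                          # answer Y, then give the old login password
sudo chown -R ubuntu /tmp/ecryptfs.{SomeStringOfCharacters}
cp -Rv /tmp/ecryptfs.{SomeStringOfCharacters} /media/USBDrive/Recovered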


1094 - Linux, Beer, and Who Cares? | 2012-10-11

By BuyerBrown, RedDwarf, and FiftyOneFifty

This is a recording of an impromptu bull session that came about one night after BuyerBrown, RedDwarf, and I had been waiting around on Mumble for another host to join in. After giving up on recording our scheduled podcast, we stayed up for about an hour talking and drinking, when Buyer suddenly asked Red and me to find current events articles concerning Linux. When that task was completed, Buyer announced he was launching a live audiocast over Mixlr.com with us as his guests. You are about to hear the result. Topics range from the prospects of Linux taking over the small business server market now that Microsoft has retreated from the field, to Android tablets and the future of the desktop in general, and the (at the time) revelation that Steam would be coming to Linux (on the last point, let me be the first to say that I am glad some of the concerns in my rant appear to be unfounded; apparently, after a lot of work, Left 4 Dead 2 runs faster under Linux than it does under Windows on equivalent hardware). This podcast was recorded on a whim, but I can't promise it won't happen again.


1000 - Episode 1000 | 2012-05-31

Hacker Public Radio commemorated its 1000th episode by inviting listeners, contributors, and fellow podcasters to send in their thoughts and wishes for the occasion. The following voices contributed to this episode.

FiftyOneFifty, Chess Griffen, Claudio Miranda, Broam, Leo LaPorte and Dick DeBartolo, Dan Lynch, Becky and Phillip (Corenominal) Newborough, Dann Washko, Frank Bell, Jezra, Fabian Scherschel, k5tux, CafeNinja, imahuph, Johan Vervloet, Kevin Granade, Knightwise, MrX, NYBill, Quvmoh, pokey, MrGadgets, riddlebox, Saturday Morning Linux Review, Scott Sigler, Robert E. Wooden, Sigflup, BrocktonBob, Trevor Parsons, Ulises Manuel López Damián, Verbal, Ahuka, westoztux, Toby Meehan, Chris Garrett, winigo, Ken Fallon, Lord Draukenbleut, aukondk, Full Circle Podcast


0938 - Cloning Windows WiFi Profiles and Installing Skype Under 64-bit Fedora | 2012-03-06

The other day I was copying a customer's files and settings from an old laptop to a new one. Much of this tedious task was handled automatically by Fab's Autobackup (http://fpnet.fr/ , and 25% off until Valentine's Day, BTW), but I was disappointed that his dozen WiFi access point profiles and passwords were not among the settings that Fab's copied for me. For a family laptop, you usually just have to re-enter the password for the home router, and maybe once again for your work wireless. If you are a tech for an enterprise, and the new mobile workstation needs to connect to multiple access points, you always wind up walking around the business or campus, connecting to each SSID in turn and entering a different key. This time, the laptop would be used in multiple remote offices. The user would have been able to re-create those connections as he traveled to each office, but he asked me if it wouldn't be possible instead to transfer the profiles with the rest of his data.

I had no doubt that I would be able to find a free tool to back up and restore wireless connections, but I have become wary of Windows utilities that can be found at the end of a Google search but have not been recommended by other techs or a trusted website. I was surprised to find my answer in some functions added to the DOS netsh (or "net shell") command, starting with Windows Vista.

Open a Windows command prompt on the laptop that already has the WiFi keys set up, ergo the old one, and type:

netsh wlan show profiles

then press return. This will give you a list of your existing wireless connection profiles by name (i.e. by SSID). Now you can pick a WiFi profile name and enter on the command line:

netsh wlan export profile name="SSID_above_in_quotes" folder="C:\destination"

Quotes are required for the WiFi profile name, but not for the destination folder unless you use spaces in your Windows directory names. If you want to create export files for all your wireless connections, you may omit the "name=" part.

netsh wlan export profile folder="C:\destination"

Omitting "file=" of course creates export files in the current directory.

The netsh wlan export profile command generates an .XML export file for each selected profile. Each export file contains the SSID, channel, encryption type, and a hash of the encryption key to be transferred to the new laptop, except that it doesn't work, at least not for me and several others who posted articles to the web. On my first try, I was able to import everything but the encryption key: all the access points showed up in "Manage Wireless Networks", but I was prompted for a key when I tried to connect. I thought maybe this was Microsoft's attempt at security, but I could see a field for the hash in the .XML, and when I went back to the article on netsh, it was clear I was supposed to get the keys too. A little more Google searching revealed a second article on netsh that gave me an argument the first one omitted: adding key=clear at the very end of the netsh command causes the keys to be exported in clear text! Our command now looks like:

netsh wlan export profile folder="C:\destination" key=clear

Copy your .XML profile files to the new laptop (I am assuming via USB key). The filenames will be in the format:

Wireless Network connection-<profile name>.xml

You understood me correctly, this DOS command generates file names with spaces in them. Copy the .XML files to the new system and import the profiles with:

netsh wlan add profile filename="<exported file name>.xml"

It's not quite as odious as it looks because DOS now supports TAB completion, so you just have to type:

netsh add profile filename="Wi and press

The rest of the name of the first profile will be filled in, complete with the terminating quote. Press <Enter> and you should get a message that the wireless profile has been imported. To import the remaining profiles, just use the up arrow to recall and edit the last command. Since it was set to auto-connect, the laptop I was working on made a connection to the local access point the instant the corresponding profile was imported.

Learning these new netsh functions may make configuring WiFi more convenient (I can maintain a library of wireless profiles for the organizations I service, or I could implement an encryption key update via a batch file). I can also see ominous security implications for networks where users aren't supposed to be privy to the connection keys and have access to pre-configured laptops, such as schools. One could whitelist the MAC addresses of only the organization's equipment, but there is always that visiting dignitary to whom you are expected to provide unfettered network access. Besides, anyone with access to the command line can use ipconfig to display the laptop's trusted MAC address, which can be cloned for access from the parking lot or from across the street. The only way I see to secure the connection from someone with physical access to a connected laptop is to install kiosk software that disables the command line.

Installing Skype on 64-bit Fedora

Last week I decided to install Skype as an alternative way to contact people with landlines. I haven't played with Skype since I had it on my Windows workstation, so I downloaded and installed the .rpm for Fedora 13+. All Skype offers is a 32-bit package for Fedora, and sure enough, when I tried to launch Skype, the icon bounced around Compiz-fashion, then the application item on the taskbar closed without doing anything. I looked for information on troubleshooting Skype from the logs, and an Arch wiki article told me I might have to create ~/.Skype/Logs, which I did. The application continued to crash without generating a log. I heard someone mention once on a call-in podcast that they'd had to perform additional steps to make 32-bit Skype work in 64-bit Fedora 15, and a Google search took me to the khAttAm blog (link below). I experienced some trepidation because the steps involve installing additional 32-bit libraries (if you heard me on the Hacker Public Radio New Year's Eve shows, you might have heard me say I've experienced a bit of dependency hell over conflicts between 32 and 64-bit libraries), but the instructions in the article went flawlessly (I don't know if khattam.info represents one person or more than one, but you rock!). http://www.khattam.info/howto-install-skype-in-fedora-15-64-bit-2011-06-01.html

First, as root run yum update

Next, add the following line to /etc/rpm/macros (create it if it doesn't exist):

%_query_all_fmt %%{name}-%%{version}-%%{release}.%%{arch}


Finally, install these 32-bit libraries:

yum install qt.i686 qt-x11.i686 libXv.i686 libXScrnSaver.i686

After that, I was able to launch the application and log into my Skype account.


0594 - Using FFMPEG To Convert Video Shot With An Android Phone | 2010-11-11

This episode comes with detailed shownotes which can be found on the hpr site http://hackerpublicradio.org/shownotes/hpr0594.html

Become a Correspondent