BVLog: Bryan Voss’ mental synchronization point

26Mar/08

tar over ssh

It's occasionally useful to copy a bunch of files from one server to another via ssh. There are various methods to accomplish this task, but one that I like to use is tar over ssh. Unfortunately, I don't use it often enough to remember all the appropriate switches offhand. I needed it again this morning and had to search around to find the right info, so I'm posting it here for posterity.

tar cjvf - * | ssh username@remoteserver "(cd /target/dir ; tar xjvf -)"
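
To pull files in the other direction (from the remote server down to the local machine), the same idea works with the pipeline reversed. This is a minimal sketch; username, remoteserver, and the directory paths are placeholders:

ssh username@remoteserver "(cd /source/dir ; tar cjvf - *)" | (cd /target/dir ; tar xjvf -)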

23Aug/07

The case of the disappearing eth0

There have been a couple of occasions in the past week when I have lost an ethernet interface while swapping machines around. Looking back into my murky past, I can recall a couple of other times that I probably encountered the same issue. I don't recall how I resolved it before, but I have a definite solution now. I figured I should note it here so I can look it up later and so others can benefit from it.

Scenario 1: I build a Debian virtual machine using VMWare Workstation on my laptop. I later move the VM to a VMWare Server box. On first boot, VMWare Server asks if I want to assign a new UUID and I select yes. It turns out that the MAC address assigned to the virtual ethernet device is derived from the VMWare UUID, so when the UUID changes, the MAC address changes. Debian assigns eth devices based on MAC address, and therefore eth0 is lost after the MAC changes. The issue shows up when I try to start networking on the VM and eth0 doesn't come up.

Scenario 2: I install Debian on a PC-class box and tinker with it a while. It breaks (something to do with heat, probably a fan failure). I move the hard drive to an identical box and it boots fine, but eth0 doesn't come up. Same as above. Since Debian assigns the eth devices based on MAC address and the new ethernet device has a different MAC address, I get no eth0.

Solution: A comment on this post pointed me down the path of enlightenment.

/etc/udev/rules.d/z25_persistent-net.rules contains the MAC-address-to-eth-device mappings. Delete lines like the ones below, noting the module name on the "# PCI device" line:

# PCI device xxxxxx:xxxxxx ([module])
SUBSYSTEM=="net", DRIVERS=="?*", ATTRS{address}=="xx:xx:xx:xx:xx:xx", NAME="eth0"
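
If you'd rather not edit the file by hand, a sed one-liner can drop the stale entry. This is just a sketch, assuming aa:bb:cc:dd:ee:ff is the old MAC address shown in the rule (the "# PCI device" comment line above it is harmless if left behind):

sed -i '/aa:bb:cc:dd:ee:ff/d' /etc/udev/rules.d/z25_persistent-net.rules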

This removes the MAC to eth device mapping info. Now we need to restart udev to allow the change to take effect:

/etc/init.d/udev restart

The next step is to "bounce" the kernel module for the ethernet device, using the module name from the z25_persistent-net.rules file noted above:

modprobe -r [module]
modprobe [module]
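
For example, if the "# PCI device" comment named pcnet32 (a common driver for VMWare virtual NICs; substitute whatever module your rules file actually listed):

modprobe -r pcnet32
modprobe pcnet32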

"ifconfig" should now show the eth0 interface as up and running. If not, try "ifup eth0" and check "ifconfig" again. That rascally ethernet interface can't hide for long!

Update 2009/03/11: This post details a method that does not require modifying individual VMs. Probably a better solution for template VMs or virtual appliances.

8May/07

diff ‘Linux traceroute’ ‘Windows tracert’

A coworker and I were trying to debug a remote connectivity issue between a Windows box, connected via a Juniper SSL VPN, and a RHEL box. We were able to do a tracert from a Windows box through the Juniper, but a traceroute under Linux would not work. We checked for iptables rules, routing, etc., to no avail.

I vaguely remembered something about a difference between Windows and Linux/UNIX traceroute: something about one using ICMP and the other using UDP. I finally found the answer in the traceroute man page: "-I Use ICMP ECHO instead of UDP datagrams." I slapped a -I argument in there and traceroute began to work.
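
For the record, the difference boils down to something like this (remotehost is just a placeholder):

traceroute remotehost      # default on Linux: UDP datagrams to high-numbered ports
traceroute -I remotehost   # ICMP ECHO, the same probe type Windows tracert uses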

My coworker said, "Well, I guess we just have to remember that Linux traceroute is buggy." Grrr. I said, "Or maybe it's Windows traceroute that's buggy and non-standard. [thoughtful pause] Although it does seem like ICMP would be the protocol to use for traceroute."

We talked about reading the RFCs for an answer, then shrugged and went on with our duties. It would be nice to be able to argue that one implementation of traceroute more closely adheres to the RFCs than another. Honestly though, outside the two of us having the discussion, there's probably nobody else in the office that even knows what an RFC is.

29Dec/06

Racks and cables and wireties, oh my!

I spent almost all day at work yesterday recabling an entire rack while it was running. The whole rack is devoted to our medical imaging (PACS) system, so it's very painful to schedule downtime.

We moved the rack to our new datacenter about a month or so ago. In the process, we pulled the rackmount UPSes out and connected directly to the central redundant UPS system. The rack was originally configured with one UPS per server (!), meaning the bottom half of the rack was filled with UPSes and the top with servers. We installed vertical PDUs in the back of the rack on either side and had to swap the power cables out on each server. Since we were already at the end of our scheduled downtime, we had to frantically get everything back up and running without cleaning up the rat's nest of cables in the back.

I was pulling a test server out of the rack yesterday and got into the back to disconnect it. It was such a mess that I decided to spend the time to clean it up. Thankfully, the power and network are redundant on that rack (we're slowly working towards doing this on all racks). The rack was staged by the vendor and shipped to us pre-cabled. I suspect that a large part of the expense involved in purchasing the system went towards all the cables and wireties jammed into the back of that thing. Why use a three-foot cable when you can use a ten-foot one? I was practically wading in snipped wireties by the time I got all the old cabling out.

Early in the process, I was merrily pulling cables when I heard a knock on the datacenter door. The rack I was working on just happened to be near the door, otherwise I never would have heard anything over the blasting fans in the room. I opened the door and there was our PACS admin (the guy who handles the clinical side of the medical imaging system). Apparently, I had disconnected the power cable on the database server's SCSI tray. Surprisingly, the database stopped responding. We spent 30 minutes bringing it back up and calling vendor support to make sure everything was back to normal.

After that mishap, I was much more careful about pulling cables. With the PDUs we're using, the power cables fall out at the slightest wiggle. The PDUs come with a bracket that you can add to wiretie the cables down. I hadn't mounted the brackets yet due to the time constraints we had when originally moving the rack. So, I spent the time yesterday mounting the brackets and tying everything down snugly.

When I finished all the cabling, I closed the rack and cleaned up the mess on the floor. I was about to roll my cart full of old cables out the door and leave for the day when I stopped. I went back to the rack, opened the back doors and just spent a couple of minutes admiring the niceness of all the clean cabling. With a sigh of satisfaction, I closed the doors and went on my way. It's those little moments of relishing a job well done that make the daily hassles worthwhile.

12Oct/06

NetBIOS Aliasing

Here’s a nifty trick I just learned. I have a Win2003 server at work that needs to respond to multiple NetBIOS names.

  • In Regedit, navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters
  • Add an OptionalNames value of type REG_SZ and set it to the alias name
  • Restart the Server service
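
For reference, the same change from a command prompt looks roughly like this; ALIASNAME is a placeholder for the extra NetBIOS name, and the commands need to run with administrative rights:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters" /v OptionalNames /t REG_SZ /d ALIASNAME
net stop server
net start server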