Monday, November 5, 2012

Fixing Cron Jobs in Zimbra 8 on CentOS 6.3

In a previous post, I detailed some preliminary steps for installing Zimbra 8 GA on CentOS 6.3.  Today, I realized that the server status page was not accurately reporting the state of various Zimbra services and server metrics.  A little digging around showed me that none of the Zimbra 8 cron jobs were running or properly scheduled.  Here is how I solved the problem:

First, install cronie:

# yum install cronie
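
If crond isn't already running after the install, it doesn't hurt to start it and make sure it comes back at boot:

# service crond start
# chkconfig crond on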

This will provide us with a familiar crontab interface and make it much easier to add the Zimbra 8 cron jobs to our system.  Next, we must remove exim so that we don't have a second MTA on the system polluting our running process namespace:

# rpm -e --nodeps exim
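
While we're at it, a quick check for any other MTA packages left on the box can save a headache later (Zimbra bundles its own Postfix under /opt/zimbra and does not use the distribution's copy):

# rpm -qa | grep -Ei 'exim|sendmail|postfix'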

Leaving exim installed will cause it to conflict with Zimbra's MTA.  If the MTA fails to start, you might encounter errors like the following:

[zimbra@zimbra ~]$ zmmtactl start
Rewriting configuration files...done.
postfix/postfix-script: warning: not owned by root: /opt/zimbra/postfix-2.10-20120422.2z/conf/main.cf
postfix/postfix-script: warning: not owned by root: /opt/zimbra/postfix-2.10-20120422.2z/conf/master.cf
postfix/postfix-script: warning: not owned by root: /opt/zimbra/postfix-2.10-20120422.2z/conf/master.cf.in
postfix/postfix-script: warning: not owned by root: /opt/zimbra/postfix-2.10-20120422.2z/conf/tag_as_foreign.re
postfix/postfix-script: warning: not owned by root: /opt/zimbra/postfix-2.10-20120422.2z/conf/tag_as_originating.re
 
Run the following commands, starting as root and then switching to the zimbra user:

# /opt/zimbra/libexec/zmfixperms --verbose --extended
# su - zimbra
$ zmmtactl restart

Next, navigate to the Zimbra 8 crontabs directory:

[root@zimbra crontabs]# pwd
/opt/zimbra/zimbramon/crontabs

From here, we have to concatenate all of our cron files into one easy-to-digest crontab:

[root@zimbra crontabs]# cat crontab >> crontab.zimbra
[root@zimbra crontabs]# cat crontab.ldap >> crontab.zimbra
[root@zimbra crontabs]# cat crontab.logger >> crontab.zimbra
[root@zimbra crontabs]# cat crontab.mta >> crontab.zimbra
[root@zimbra crontabs]# cat crontab.store >> crontab.zimbra
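
If you prefer, the same concatenation can be done in one shot:

[root@zimbra crontabs]# cat crontab crontab.ldap crontab.logger crontab.mta crontab.store > crontab.zimbra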

Finally, we use crontab to load the new crontab.zimbra file:

[root@zimbra crontabs]# crontab crontab.zimbra

A quick 'crontab -l' will show our current crontab, and after a few minutes, Zimbra 8 on CentOS 6.3 should be properly reporting server statistics, performing its regular housekeeping, and more.  There may be a better way to go about this, but I haven't found one yet.  Before putting this into production, please test thoroughly.  If I encounter any other issues, I will either update this post or create an entirely new post (depending on the scope of the problem).

Wednesday, October 10, 2012

Bulk package adder script for OpenBSD 5.1

Currently, I am in the process of learning OpenBSD 5.1 in addition to teaching myself Perl.  In particular, I want to play around with the Catalyst framework.  After doing some searching, I could not find an easy way to install a list of packages generated by redirecting pkg_info's output to a text file, so I figured this was the perfect time to try my hand at writing a Perl script to do the job for me.  Yes, the script is a bit primitive, but I did learn a fair amount during its creation.

Naturally, after creating this script, I learned about 'pkg_add -l /path/to/listofpackages'.  That's OK - I learned some valuable skills in Perl that I can transfer to other applications.

Here is how I generated my list of packages to install for the Catalyst framework on Perl 5:

# pkg_info -Q p5-Catalyst > packages.txt
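
For completeness, the 'pkg_add -l' shortcut mentioned above boils the whole job down to a single command once the list exists:

# pkg_add -v -l packages.txt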

Here is the code for the script (which I plan to put on GitHub):

#!/usr/bin/perl -w

# Name:         pkgadder.pl
# Version:      1.0
# Revision:     October 10, 2012
# By:           Ramon J. Long III
# http://sysadminatlarge.blogspot.com

# This script will read in the contents of 'packages.txt'
# and use pkg_add to install each package.

# There are different ways to generate your package list.
# For example: 'pkg_info -Q p5-Catalyst > packages.txt'
# This command will create a list of all Perl5 Catalyst packages.

# TO DO:
# Add command switch to add or remove packages.
# Add options for pkg_add and pkg_delete switches.

use strict;
use warnings;

# Open the package list to install:
open (my $list_fh, '<', "packages.txt") or die ("Unable to open packages.txt: $!");

# Read the file into an array, one package name per element:
my @openbsd_pkgs = <$list_fh>;

# Let the user know we are processing their package list:
print "Now processing the package list.  This may take a while...\n";

# Iterate over each package name in the array:
foreach my $pkg (@openbsd_pkgs) {

        # Remove the trailing newline from the package name:
        chomp ($pkg);

        # Skip any blank lines in the package list:
        next unless $pkg;

        # Run pkg_add verbosely and interactively with a progress meter,
        # reading its output back through a pipe:
        open (my $fh, '-|', "pkg_add -v -i -m $pkg") or die "Can't open pipe: $!";

        # The command runs as we read its output from the pipe:
        my @lines = <$fh>;

        # Show the output, then close the command handle
        # (close fails if pkg_add exited with a non-zero status):
        print @lines;
        close ($fh) or die "pkg_add failed for $pkg (exit status $?)\n";

}

# Close the file handle for packages.txt:
close ($list_fh);

# Return a true value in case this script eventually becomes a module:
1;
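
To run the script, drop packages.txt in the same directory and execute it as root.  This sketch assumes PKG_PATH already points at a package mirror for your release and architecture (substitute your own mirror):

# export PKG_PATH=ftp://your.mirror.example.org/pub/OpenBSD/5.1/packages/amd64/
# perl pkgadder.pl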

Friday, September 14, 2012

Zimbra 8 GA on CentOS 6.3

Much to the pleasure of many SysAdmins, Zimbra 8 General Availability has been announced by VMware.  I have been planning on migrating to the Zimbra Open Source Edition for my commercial email platform.  (I will migrate clients over using imapsync.)  Due to my upcoming Spacewalk deployment, I decided early this year to standardize my department on CentOS.  Naturally, I figured it was time to update my CentOS VM template to 6.3 and give Zimbra 8 a test.

When I set up my CentOS Linux Virtual Machines, I prefer to attach multiple SCSI vmdk files as my guest storage; usually 25 GB thin-provisioned disks.  Then, when building my template, I tend to place /boot and swap on one disk, and use the rest of the space for LVM.  This allows me to quickly add more space to my Virtual Machine down the line without having to power down or restart my guest OS; a feature that is very important for capacity planning and meeting my SLAs.  I also like to forgo setting resource reservations, and opt instead for resource limits.  My standard VM template has 4 GB RAM, 4x vCPUs, a 4000 MHz CPU limit, and a 4096 MB (4 GB) RAM limit.  Also, installing VMware Tools, while sometimes a pain (and outside the scope of this blog), is beneficial for tighter management from vCenter.
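
As a rough sketch of what that growth looks like from inside the guest (hypothetical SCSI host, device, and volume group names; CentOS 6 with ext4): after hot-adding or expanding a vmdk, rescan the SCSI bus, add the new disk to LVM, and grow the file system online:

# echo "- - -" > /sys/class/scsi_host/host0/scan
# pvcreate /dev/sdc
# vgextend vg_data /dev/sdc
# lvextend -L +10G /dev/vg_data/lv_data
# resize2fs /dev/vg_data/lv_data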

After choosing the "basic server" packages from my 64-bit net install, I ran the following command:

# yum install wget sudo sysstat libidn gmp libtool-ltdl compat-glib vixie-cron nc perl

Then, after using WinSCP to upload my zcs folder to /root, I modified the following permissions:

# chmod +x install.sh && chmod +x ./bin/get_plat_tag.sh 

I also discovered that Zimbra 8 expects libstdc++.so.6 in /usr/lib.  So, I made a symbolic link:

# ln -s /usr/lib64/libstdc++.so.6 /usr/lib/libstdc++.so.6

Then, I modified /etc/hosts with the public IP and FQDN of my new Zimbra host.  Also, be sure to set up a DNS MX record in your zone file (it is better to have DNS up to date before installing ZCS).
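
For illustration only (hypothetical hostname and an address from the 203.0.113.0/24 documentation range), the /etc/hosts entry would look something like:

203.0.113.10    zimbra.example.com    zimbra

and the corresponding zone file records:

zimbra.example.com.    IN    A     203.0.113.10
example.com.           IN    MX    10    zimbra.example.com.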

Now, since this edition of Zimbra is built for RHEL, we need to pass the --platform-override flag to install Zimbra on our CentOS host:

# ./install.sh --platform-override

That's it.  From here, it is up to the SysAdmin to configure iptables according to their internal policy, and to make any other changes to the system as needed (while following change control and change management best practices, naturally).  The easiest Zimbra build to start with is the "all-in-one" server, and I refer the reader to the Zimbra Documentation, Wiki, and community forums for installation and configuration instructions.  You may also want to consider adding a commercial SSL certificate from your favorite vendor (if/when putting this host into production).

One of the features of Zimbra that I love so much is the modular design - it makes building Zimbra clusters much simpler than with products from some unnamed competitors.  =)  For more advanced deployments, I recommend the reader consult the resources listed above.

Tuesday, September 4, 2012

vSphere 4.0.0 and vmInventory.xml

Yesterday, one of my ESXi 4.0 hosts decided to segfault.  Fortunately, this was during Labor Day, when almost everyone (except us SysAdmins) was enjoying a hard-earned day off.  Also, I am fortunate that no production services happened to be running on that host.

After bringing the server back up, I discovered a few items needed to be fixed:

  • I had a corrupted software iSCSI configuration.
  • I had a corrupted vSwitch (the one I am using for vMotions).
  • I now had a corrupted vmInventory.xml.

As I have HA configured on the cluster, the VMs somehow evacuated the host, and while the running states were migrated over, the VMs now showed as orphaned in vCenter.  Shortly after that, the VMs disappeared entirely from the inventory.  Thankfully, all services somehow managed to stay up.  I could ping and SSH into the supposedly "down" VMs all day long.

I used the instructions in the following VMware KB to attempt to repair vmInventory.xml:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1007541

Unfortunately, this did not work.  After a few hours of repairing software iSCSI, and restoring and rebuilding my vSwitch (in addition to checking the integrity of the other vSwitches), it dawned on me that I could perhaps just browse through my LUNs and re-add the orphaned VMs to the appropriate host.  Well, some of the vmx files were locked, and it was fun getting those unlocked.  Finally, after all the files were unlocked, I successfully added one of my VMs to the inventory...

... only, it stated the VM was invalid.  Damn it!  So close.  I verified all critical VMs were up and running, then I went home to see my wife and daughter, ate a very belated dinner, and then went to sleep.

This afternoon, it occurred to me that the object of my quest resided in vmware.log.  On a hunch, I browsed to the datastore of the VM, pulled down a copy of vmware.log, and was not surprised to find that the log contained information about the last ESXi host the VM ran on, along with the details of the vmx (in case it had to be rebuilt).  I then attempted to re-add the VM to the proper host, and vCenter indicated that I had indeed placed the VM on the right host.
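
For future reference, the same re-registration can be done straight from the ESXi console when vCenter is being uncooperative.  A rough sketch, with a hypothetical datastore path (vmkfstools -D is also handy for seeing which host holds the lock on a stubborn vmx):

# vmkfstools -D /vmfs/volumes/datastore1/myvm/myvm.vmx
# vim-cmd solo/registervm /vmfs/volumes/datastore1/myvm/myvm.vmx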

As a side effect, I learned a few things about my environment that I need to adjust, and ways I can improve my monitoring and my infrastructure, not the least of which is scheduling some time to upgrade ESXi and vCenter.  Now that the "vRAM Tax" has been eliminated in vSphere 5.1, I am seriously considering renewing my service contract instead of picking a new hypervisor platform.

Wednesday, July 4, 2012

EvolveBDR v0.0.2 - a Bacula OVF

EvolveBDR v0.0.2 is a portable, OVF-format Virtual Machine pre-configured to run Bacula as easily as possible.  EvolveBDR aims to supply sane default configurations that allow SysAdmins to rapidly deploy it in their environments while greatly minimizing the learning curve necessary to successfully implement Bacula as a file-level backup solution.

I spend a large portion of my time on the internet haunting /r/SysAdmin on Reddit.  A common question on that venerable forum goes something like "Can you please recommend a low-cost/FOSS enterprise backup solution?"  I and many others are quick to recommend Bacula, BackupPC, Areca, Cobian Backup, and Veeam's free edition.  Personally, I find myself recommending Bacula more often than not: a proven solution that has been in development for the better part of a decade, if not longer.

The power of Bacula comes with a small price: a learning curve that is often difficult for the novice *NIX administrator, especially the Jr. Admin who finds themselves in a primarily Windows-based environment.  (This is not a judgement against their skill-sets or a denigration of their worth as a SysAdmin.)  EvolveBDR is a response to this challenge: an OVF-format Virtual Machine which comes pre-configured to use Bacula as a file-based backup solution, with hard disks as the primary archival medium.

Some features you may find interesting:
  • The virtual appliance has been built on top of TurnKey Core v12.0 RC.
  • The file system lives on top of LVM for rapid provisioning of more disk space without rebooting the VM.
  • CurlFtpFs to make remote backups easier (see the example after this list).
  • A default volume pool and Windows-friendly FileSets (one at the moment for Win2k3; Win2008 on the way) to help Windows SysAdmins spend less time configuring Bacula and more time implementing EvolveBDR/Bacula in their environment.
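
As a quick illustration of the CurlFtpFs piece (hypothetical host, credentials, and mount point), mounting a remote FTP server so Bacula can write volumes to it looks something like:

# curlftpfs ftp://backupuser:secret@ftp.example.com /mnt/remote-backup
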
It should be noted that EvolveBDR is still being developed and tested.  If you insist on using this in production, you do so at your own risk.  It is also recommended that you change all of the currently configured passwords to something a little more secure than 'bacula'.  ;)

Tuesday, July 3, 2012

Admins locked out by Group Policy? PsExec to the rescue!

Have you ever accidentally used Group Policy to prevent yourself from being able to edit Group Policy?  You wouldn't be the first SysAdmin - it can happen to even the most seasoned of us; especially in the middle of the night during a particularly long maintenance window...

Here is the scenario:

A single Windows Server 2008 R2 machine running AD and RDS (clearly in need of a couple of Domain Controllers and the transfer of FSMO roles away from the Remote Desktop Session Host).  Fortunately, AD Recycle Bin is enabled.  A young SysAdmin decides to modify Group Policy on this time bomb without peer review or change management.  Fortunately, he didn't edit the default Domain Policy, but he did apply a policy to all Users that prevented access to the Control Panel and explicitly forbade the execution of the MSCs from Start --> Run.  Hell, he even disabled the execution of MMC, RegEdit, and CMD.EXE...!

Wouldn't you know it, he even had the courtesy to run a 'gpupdate /force'...

It is at this time that many SysAdmins would try things like editing the Registry by hand, deleting entries from SYSVOL, and other nefarious fixes that are sure to make your day a whole hell of a lot worse.

(Had the Jr. Admin only edited the default Domain Policy, we could perhaps have used Dcgpofix to restore the default Domain Group Policy as our Disaster Recovery.)

Fortunately, TaskMgr was still accessible, and PsExec was installed on the host to fix a previous issue with a legacy application that wasn't multi-thread aware.  (We modified the default shortcut for the application and added PsExec to set the affinity of the application to a single core - works like a charm!)  How did I fix the issue?  I did the following:

Start --> Run --> taskmgr, followed by File --> New Task (Run...) and 'psexec cmd.exe', with the option to create the task with administrative privileges checked.  This allowed me to spawn a shell.  Now, I thought I was going to have to use REG from the command line to query and delete Registry keys, but my intuition told me to try running MMC for the hell of it...
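
For the record, had MMC stayed blocked, the REG fallback would have looked something like this.  The exact key and value names depend on which policy settings were applied; NoControlPanel under the per-user Policies\Explorer key is one common example, and keep in mind the GPO would simply re-apply the value at the next refresh, which is why editing the policy object itself is the real fix:

C:\> reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v NoControlPanel
C:\> reg delete "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v NoControlPanel /f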

And it launched.

Now, the real test is whether or not I can add the GPO snap-ins and edit the policy object...

Yes.

I then used the Group Policy Editor MMC snap-in to disable the GPO followed by another 'gpupdate /force' from my PsExec-spawned shell.

I saved the day, and now I have convinced the client to spin up a few Domain Controllers.

As for the Jr.?  There never was a Jr. SysAdmin - it was me the entire time; or, at least I think it was me.  An unnamed party also has Domain Admin rights, but they swear they didn't cause this blunder.

Maybe I will dig through the logs, for fun and profit, just to make sure...

Wednesday, June 13, 2012

Let IT Help You

Project management can be a difficult process, especially if there is a breakdown in communication between any of the stakeholders or resources.  Are you migrating your infrastructure?  Don't wait to tell your end users until the morning everything is cut over into production.  Are you building a website for a client, and the site needs to go live?  Don't wait until two hours before the deadline to tell your IT staff that you need DNS changes and web server resources for production.

Competent Systems and/or Network Administrators know how to work with the limitations of the technologies crucial to the success of your project, and they should be consulted on every project that utilizes or consumes any IT assets that you expect to be provisioned in time to meet your deadline.

Yes, we love our ticket systems, and no, you are not exempt from following our documented policies and procedures.

Why?  Why must you email me a ticket, or fill out a provisioning request form on the company intranet, to have resources allocated to your needs?  Like you, we Admins have to manage our time in the most effective way possible, and we value consistent results.  We want to prioritize and (most importantly) remember your request.

It can be difficult for an organization to get used to working with a competent SysAdmin - there will be growing pains for all involved.  Disorganization and crisis are not necessarily indicative of hard work, and if you must work hard, at least work smarter.

Keep us in the loop.  Is your project approved?  At the very least, Cc our ticketing system.  This one simple little act will save you many hours of frustration.  Get into the habit of sending your SysAdmins a ticket for every request upon their time.

You may look at us as a cost center.  Perhaps this is because you won't let IT help you make business decisions.  Of course, such a relationship requires a large amount of trust, and when that trust exists, we can show you how wonderful technology can be in the corporate environment.

Work against IT, and you will constantly be frustrated when we can't magically circumvent technical limitations because you're about to be late and over budget on your project.

Work with IT, and let us help you remove so much unnecessary stress from your daily operations.

Technology can be fun, but you have to work by our rules so that we can implement industry best practices.  Of course, you could play by the rules and have a little fun with technology - the choice is yours.