Thursday, January 27, 2011

Fix a tomcat6 error message "/bin/bash already running" when starting tomcat?

I have an Ubuntu 10.04 machine that has tomcat6 on it. When I start tomcat6 with /etc/init.d/tomcat6 start, I get

* Starting Tomcat servlet engine tomcat6
/bin/bash already running.

and the server fails to start. Unfortunately, there is nothing in /var/log/tomcat/catalina.out to help debug the issue. With some cleverly placed echo statements, it seems to be caused by this line from /etc/init.d/tomcat6:

start-stop-daemon --start -u "$TOMCAT6_USER" -g "$TOMCAT6_GROUP" \
                -c "$TOMCAT6_USER" -d "$CATALINA_TMPDIR" \
                -x /bin/bash -- -c "$AUTHBIND_COMMAND $TOMCAT_SH"

The only thing I've changed in this script is TOMCAT6_USER=root. In server.xml, the only thing I've changed is <Connector port="80" protocol="HTTP/1.1" (from port 8080). I have tried reinstalling the package by first removing everything with sudo apt-get --purge remove tomcat6 and then sudo apt-get install tomcat6, but this has not solved the issue. I have also restarted the server multiple times in hopes of some magic. Everything was working until I restarted my server. Any ideas?

  • Hi,

    Looking at the man page for start-stop-daemon, it looks for processes which match the name, uid, and/or gid of the command it's being asked to start. From the error message, I'd guess it may be doing this based on the /bin/bash command - so it's finding that there's already a root process running the /bin/bash command, and refusing to start a "duplicate" one.

    You could work around this by hacking around in the init script. But running Tomcat as root is Bad, so it is better to look at other ways to send port 80 to Tomcat, even while Tomcat runs as a non-root user. The most common approach is to run Apache httpd in front; another (if you don't want to fiddle with connectors) is to use iptables to map port 80.

    See the serverfault question for details on how to do these.
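    For a rough idea of the iptables approach, a redirect rule along these lines is typical (a sketch only; it assumes Tomcat keeps listening on its default 8080, so adjust ports to your setup):

    # send incoming requests for port 80 to Tomcat listening on 8080
    iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080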

    From Kief
  • Sometimes you gotta do what you gotta do.

    Here's a patch that makes running tomcat as root work:

    --- init.d.old/tomcat6  2010-09-01 15:31:01.996208252 -0700
    +++ init.d/tomcat6  2010-09-01 15:30:10.315146226 -0700
    @@ -1,4 +1,4 @@
    -#!/bin/sh
    +#!/bin/sh -x
     #
     # /etc/init.d/tomcat6 -- startup script for the Tomcat 6 servlet engine
     #
    @@ -141,6 +141,12 @@
            cd \"$CATALINA_BASE\"; \
            \"$CATALINA_SH\" $@"
    
    +   cat >/etc/init.d/tomcat_exec.sh <<-EOT
    +   #!/bin/bash
    +   $TOMCAT_SH
    +   EOT
    +   chmod +x /etc/init.d/tomcat_exec.sh 
    +
        if [ "$AUTHBIND" = "yes" -a "$1" = "start" ]; then
            TOMCAT_SH="'$TOMCAT_SH'"
        fi
    @@ -151,7 +157,7 @@
        chown $TOMCAT6_USER "$CATALINA_PID" "$CATALINA_BASE"/logs/catalina.out
        start-stop-daemon --start -u "$TOMCAT6_USER" -g "$TOMCAT6_GROUP" \
            -c "$TOMCAT6_USER" -d "$CATALINA_TMPDIR" \
    -       -x /bin/bash -- -c "$AUTHBIND_COMMAND $TOMCAT_SH"
    +       -x /etc/init.d/tomcat_exec.sh 
        status="$?"
        set +a -e
        return $status
    
    From
  • There's an Ubuntu bug for this issue, with a proposed patch.

    It's not necessarily anything to do with running as root - if your tomcat6 user has a /bin/bash process (say, you're using it to run some commands to support your Tomcat application), then you will hit it as well.

    From crb

Tracking Security Vulnerability remediation

I've been looking into this for a little while, but haven't really found anything suitable.

What I am looking for is a system to track security vulnerability remediation status. Something like "bugzilla for IT".

What I am looking for is something pretty simple that allows the following:

  • batch entry of new vulnerabilities that need to be remediated
  • Per user assignment
  • AD/LDAP Authentication
  • Simple interface to track progress - research, change control status, remediated, etc.
  • Historical search ability
  • Ability to divide by division
  • Ability to store proof of resolution for the Security Team to access
  • Dependency tracking
  • Linux based is best (that's my group :) )
  • Free is good, but cost doesn't matter so much if the system is worth it

The system doesn't have to have all of these features, but if it did, that would be great.

Yes, we could use our helpdesk software, but that has a bunch of pitfalls, such as triggering SLA alerts and penalties, as well as not being easily searchable outside of a group.

Most of what I have found are bug tracking systems that are geared towards developers, and are honestly way overkill for what I am looking for.

Server Fault's input is greatly appreciated, as always!

  • This is not an answerable question, but a discussion opener and rather belongs to a forum...

    From Craig
  • Ok, as far as I know, there is no product that will do this; you would have to roll your own.

    As far as starting points, I would start with Metasploit and nmap to gather your vulns, drop them into a db (mysql, postgres, etc.), and use that input as creation items for a bug-tracker (Trac, Redmine, etc) and use that as your ticketing engine.
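    For the gathering step, even a simple nmap service scan exported to XML can be the feed for that database (the network range and filename here are just placeholders):

    # scan a subnet, detect service versions, and save machine-readable output for import
    nmap -sV -oX vuln-scan.xml 10.0.0.0/24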

    As far as getting your AD/LDAP authentication records, you could probably do that input with syslog collection; I'm not sure if you could collect directly from there into your db.

    I won't go as far as to say that 'if you productized this, you'd get rich', but with the right SEO, you could certainly get a lot of pageviews and/or consulting offers.

    In any case, I hope it's worth it to you, because it's going to be a lot of work! ;-)

    UPDATE: Have you looked into Metasploit?

    From gWaldo

Setting up two NICs on two separate LANs provides error

I wish I had found this before starting in, but it's too late for that...


I am running CentOS 5.5.

I started following this guide for setting up two NICs on different networks. Everything was going fine until I hit this step:

This part allows routing to the direct neighbor over the correct interface:

ip route add 10.2.0.0 dev eth0 src 10.2.0.1
ip route add 10.1.0.0 dev eth1 src 10.1.0.1

I am hit with the following error when I try to do those commands:

RTNETLINK answers: Invalid argument

I am not very experienced in server setup. However, I have been tasked to do this, so I am looking for help. Any suggestions on where to go from here?

Alternatively, any suggestions on how to undo what I have done so far, in order to give the other guide listed above a try?


Edit: I forgot to mention, this server also has the tool Webmin installed, if that should help any.

  • Do your ethernet devices have such IPs? I.e. does eth0 have 10.2.0.1 and eth1 10.1.0.1? Or better - couldn't that be a typo, so that eth0 must have 10.1.0.1 and eth1 10.2.0.1?

    Aeo : I don't really know if it's a typo or not. I know just enough about networking to adapt the schema to fit our network setup. Beyond that, it's all educated guessing. Feel free to correct me if I'm wrong, but would not switching those two IPs actually confuse the two separated sides?
    From lorenzog
  • That guide is convoluted and not the standard way to do it in CentOS.

    You aren't specifying the netmask so I guess it is assuming a class A based on the IP address, in which case they are on the same subnet and the second route command would replace the first. Run ip addr ls, ifconfig, route -n, or netstat -rn to see.
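    If those really are the addresses assigned to your interfaces, the same routes written with an explicit prefix length would look like this (assuming /24 networks; a sketch to adapt, not a guaranteed fix):

    ip route add 10.2.0.0/24 dev eth0 src 10.2.0.1
    ip route add 10.1.0.0/24 dev eth1 src 10.1.0.1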

    Now, I'm not sure what you are trying to do but it is best to take it in steps. First, you configure the interfaces then you add your routing. The CentOS method for configuring the interfaces is to edit /etc/sysconfig/network-scripts/ifcfg-eth0 and ifcfg-eth1. You configure your default route in /etc/sysconfig/network. You configure additional routes in /etc/sysconfig/network-scripts/route-eth0 and route-eth1.

    Here are my assumptions. Change to match your setup. The IP addrs of your CentOS box are 10.2.0.20 for eth0 and 10.1.0.20 for eth1. The netmask for both is 255.255.255.0. The gateway for eth0 is 10.2.0.1 and the gateway for eth1 is 10.1.0.1. You want all traffic to go through eth0 except 10.1.0.0/24 and 10.3.0.0/24 which go through eth1.

    In ifcfg-eth0 you have:

    DEVICE=eth0
    IPADDR=10.2.0.20
    NETMASK=255.255.255.0
    BOOTPROTO=static
    ONBOOT=yes
    

    In ifcfg-eth1 you have:

    DEVICE=eth1
    IPADDR=10.1.0.20
    NETMASK=255.255.255.0
    BOOTPROTO=static
    ONBOOT=yes
    

    In /etc/sysconfig/network you have:

    NETWORKING=yes
    HOSTNAME=whatever
    GATEWAY=10.2.0.1
    

    In /etc/sysconfig/network-scripts/route-eth1 you have:

    10.3.0.0/24 via 10.1.0.1
    
    Aeo : I did as stated and it seems to be working flawlessly. Thank you! I've spent two days puzzling over this, so seeing it finally working is wonderful.
    From embobo

Paying for cloud server when its powered off ?

Dear All,

I intend to use cloud computing for my test lab. As this will require 4-5 hours of daily testing, I am looking for a cloud server provider that charges at a reduced rate (or not at all) when my machines are off.

Can anyone please advise me of a few names?

Thanks in advance. Cheers

  • With Amazon EC2, all you'd need to pay for when the server is off is the S3 storage fees, which are quite reasonable.

    It looks like Rackspace offers similar options with their Cloud Servers and Cloud Files products.

    From ErikA

how to redirect one domain to another domain in DNS ?

How do I redirect one domain (www.abc.in) to another domain (xyz.com.) in a DNS server's forward zone file? Thanks!!

  • You cannot redirect one domain name to another, but you can create multiple names pointing to the same IP.

    Btw: you should improve your accept rate

    From krissi
  • To clarify Krissi's post, a redirect is not an operation that can be accomplished in DNS; however, you can create a CNAME record that will resolve one domain to the IP address of another. If the target is using a virtual host configuration on their webserver, this may not accomplish what you want (the target may not be accessible, may show another website, etc.). What you're talking about is an HTTP operation, which is part of a webserver's configuration.
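    For example, a forward zone file entry for abc.in could look like this hypothetical CNAME (BIND syntax; the trailing dots matter):

    www.abc.in.    IN    CNAME    xyz.com.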

    From brent

Trustees Being Ignored on NSS Volume

I'm trying to figure out what's going on here. I migrated a server from NW6.5SP8 to OES2 Linux. All the trustees on the NSS volumes came over and at some point in the past week, they started being ignored. It's almost as if all the users have supervisor rights from the top down. I don't see this set anywhere. What's going on?

Tom

  • Holy cow... someone had given the Organization supervisor rights to the TREE :O

    sysadmin1138 : *headdesk* Yowza. Glad you caught that!
    geoffc : That happens. Ah well. The silliness that can ensue with trustees!
    Tom : Yeah... I wish we had nSure audit running, I want to know who put it there!
    From Tom

Win XP error 0x80041003 using GetObject/winmgmts

My computer is called "neil" and I want to set some values using WMI in VBScript. I adapted the script below from one supplied by Microsoft. When I run it in my browser I get

Error Type: (0x80041003) /dressage/30/pdf2.asp, line 8

I suspect it is some registry/security setting.

Any advice?

John Lewis

FULL SCRIPT

call Print_HTML_Page("http://neil/dressage/ascii.asp", "ascii")

Sub SetPDFFile(strPDFFile)
    Const HKEY_LOCAL_MACHINE = &H80000002
    strKeyPath = "SOFTWARE\Dane Prairie Systems\Win2PDF"
    strComputer = "."
    Set objReg=GetObject( _
        "winmgmts:{impersonationLevel=impersonate}!\\" & _ 
        strComputer & "\root\default:StdRegProv")
    strValueName = "PDFFileName"
    objReg.SetExpandedStringValue HKEY_LOCAL_MACHINE,_
        strKeyPath,strValueName,strPDFFile
End Sub

Sub Print_HTML_Page(strPathToPage, strPDFFile)
      SetPDFFile( strPDFFile )
      Set objIE = CreateObject("InternetExplorer.Application")
      'From http://www.tek-tips.com/viewthread.cfm?qid=1092473&page=5
      On Error Resume Next
      strPrintStatus = objIE.QueryStatusWB(6)
      If Err.Number <> 0 Then
            MsgBox "Cannot find a printer. Operation aborted."
            objIE.Quit
            Set objIE = Nothing
            Exit Sub
      End If

      With objIE
      .visible=0
      .left=200
      .top=200
      .height=400
      .width=400
      .menubar=0
      .toolbar=1
      .statusBar=0
      .navigate strPathToPage
      End With

      'Wait until IE has finished loading
      Do while objIE.busy
            WScript.Sleep 100
      Loop

      On Error Goto 0
      objIE.ExecWB 6,2
      'Wait until IE has finished printing
      WScript.Sleep 2000
      objIE.Quit

      Set objIE = Nothing
End Sub
  • There's a posting on PC Review entitled WMI damaged/error 0x80041003

    One of the answers suggests this Microsoft support post which, while it concerns Vista, does have this listed as the cause of the problem:

    This problem occurs if the WMI filter is accessed without sufficient permission.

    So check your permissions and check the solution offered by Microsoft.

    From ChrisF
  • First, you need to be running this script under an account that has admin rights on each computer. This is what I think your issue is, and it could well cause your WMI provider error.

    Are you trying to run this as part of an ASP web page, or as a standalone script? Your deployment details would be helpful.

    Next, do you realize that you're setting strComputer to the local machine that this is running on? Depending on how this is being deployed, it may not do anything.

    Also, it looks like you're passing a file name of "ascii". That doesn't look right given that you're trying to set the name of a PDF file.

    Finally, in VBScript, the line breaks matter. This is how it should look:

    call Print_HTML_Page("http://neil/dressage/ascii.asp", "ascii")
    
    Sub SetPDFFile(strPDFFile)
        Const HKEY_LOCAL_MACHINE = &H80000002
        strKeyPath = "SOFTWARE\Dane Prairie Systems\Win2PDF"
        strComputer = "."
        Set objReg=GetObject( _ "winmgmts:{impersonationLevel=impersonate}!\" & _
            strComputer & "\root\default:StdRegProv") 
    
        strValueName = "PDFFileName"
        objReg.SetExpandedStringValue HKEY_LOCAL_MACHINE,_
            strKeyPath,strValueName,strPDFFile
    End Sub
    
    Sub Print_HTML_Page(strPathToPage, strPDFFile) 
        SetPDFFile( strPDFFile ) 
        Set objIE = CreateObject("InternetExplorer.Application")
        'From http://www.tek-tips.com/viewthread.cfm?qid=1092473&page=5
        On Error Resume Next
        strPrintStatus = objIE.QueryStatusWB(6)
        If Err.Number <> 0 Then
            MsgBox "Cannot find a printer. Operation aborted."
            objIE.Quit
            Set objIE = Nothing
            Exit Sub
        End If
    
        With objIE
        .visible=0
        .left=200
        .top=200
        .height=400
        .width=400
        .menubar=0
        .toolbar=1
        .statusBar=0
        .navigate strPathToPage
        End With
    
        'Wait until IE has finished loading
        Do while objIE.busy
            WScript.Sleep 100
        Loop
    
        On Error Goto 0
        objIE.ExecWB 6,2
        'Wait until IE has finished printing
        WScript.Sleep 2000
        objIE.Quit
    
        Set objIE = Nothing
    
    End Sub
    
    From gWaldo
  • Thanks for your reply. The line breaks seem to have been introduced in the process of pasting into this form.

    Well spotted - I was using a PDF file name "ascii". I added a .pdf extension but still get the error. I suspect you're right that it's to do with admin rights. Here's more about the setup and what I'm trying to achieve.

    Win2PDF is a product for writing PDFs that works by simulating a Windows printer. You "print" the page, select Win2PDF in the print dialog and it then asks for a file name. I have it installed on my PC (called Neil) and it works fine in this conventional way.

    My aim is to write an HTML page to a PDF file using Win2PDF, but via ASP/VBScript/JavaScript rather than with manual intervention. The script for doing this was provided by Win2PDF's tech support, but when it did not work, that was the limit of their understanding.

    In the sample script the file ascii.asp just produces a table of ASCII codes/characters. The URL given is on my own PC, which has IIS set up to run scripts, and it does that fine. The error I get occurs on about the fourth line executed.

    I am logged in with full admin rights - I think! But I'm no expert. I hope this helps to give some more specific suggestions about how to check/fix the admin rights.

    From John Lewis

How do I set up an installation script including ODBC-connection?

Hi!

Easy question this time.... I have an installation file and a registry edit in my script.

How do I set up an ODBC connection as well? Actually, I need to script two ODBC connections.

Any advice for me?

  • System ODBC settings are stored in the registry under:
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ODBC

    I would suggest configuring the ODBC connections manually then exporting the registry values. You can then either script a registry merge or write the values explicitly. Remember to install the relevant drivers and associated ODBC registry settings for those as well.
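    For illustration, an exported System DSN for the SQL Server driver typically looks something like the .reg file below (the DSN name, server and database are made up; on my systems the DSNs appear under HKLM\SOFTWARE\ODBC\ODBC.INI, so export your own manually-created DSN to confirm the exact path and driver):

    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI\MyAppDSN]
    "Driver"="C:\\WINDOWS\\system32\\SQLSRV32.dll"
    "Server"="dbserver01"
    "Database"="MyAppDB"
    "Trusted_Connection"="Yes"

    [HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI\ODBC Data Sources]
    "MyAppDSN"="SQL Server"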

    Maclovin : Thanks for the reply. Does anyone have any examples of doing it?
    From

How to make a clone of a HD using RHEL 5.4?

We're using RHEL 5.4 and need to clone some hard drives. What would be a good (or "correct") way to do this? I'd like to avoid using dd if I can, since that's somewhat slow. (If that's the only choice, however, then so be it.)

A few caveats:

1) Using other distros is not possible, including live CDs, as we have a very strict approval process and the only distro we can use is RHEL.

2) If at all possible we need to use software that's part of the RHEL packages. Recommendations of other software are still appreciated, but if we can use something already part of RHEL it would save us a lot of paperwork.

I realize this seems like I'm trying to make the job harder than it should be, but such is the nature of corporate regulations.

Thanks for any help!

  • Well, if you're avoiding using 'dd' and any other non-RHEL provided tool, then you're stuck with duplicating file systems and copying over the contents (using your tool of choice, cpio, tar, rsync, etc) and putting GRUB on the new drive(s). This would best be done with quiesced drives (booted into RHEL rescue mode, perhaps).

    From TCampbell
  • How exact a clone do you need? If there's LVM, for instance, do the IDs need to match? ext2 (and 3, and probably really most Linux filesystems) has a Universally Unique Identifier (UUID) on each filesystem; do those need to match between original and clone?

    In other words, what do you need the clone for?

    There's nothing faster than dd for making a true exact clone of a drive.

    Some dd alternatives

    1. Use fdisk (or parted or cfdisk or whatever) to duplicate the partitioning, and the LVM tools if necessary. Make filesystems, then use rsync, (cd /origmount ; tar cf - .) | (cd /newmount; tar xf -), or cp to copy the data. This will lay out the files in a completely different arrangement of blocks, but look identical. Or don't mount the original filesystems and use something like dump -0 -f - /dev/sda1 | (cd /mnt/sdb1; restore -rf -)
    2. Carefully use software RAID tools (md) to create a degraded RAID1 out of the original partitions, add the new partitions, wait for sync to finish and then break the RAID. This will probably be slower than dd, but most of the work can be done while the machine is running so it may be "faster" for certain definitions of that word.
    From freiheit
  • What's the purpose of cloning the drive? To quickly install multiple systems? Then create a kickstart file from the server to be cloned and use that to install the other machines.

    From John
  • These are steps I recorded for moving from a large LVM managed disk with a CentOS 5.5 install to a smaller disk (obviously the used space on the large disk was less than the size of the smaller disk). I'm sure there are better ways to do this, but this method was successful. Some steps may be specific to our situation, tweak as necessary.

    Reqs:

    • Install CD
    • New disk

    Steps:

    • Connect the new disk

    • Boot with the CD; at the prompt, type "linux rescue" to get into rescue mode. It will ask whether you want to continue or skip the mount; you should continue (it will mount to /mnt/sysimage). Do not format the new disk if it asks.

    • Check the previous geometry with fdisk -l. You'll likely have /dev/sda1 and /dev/sda2 in a regular LVM configuration. sda1 will be your /boot partition, which exists outside of the LVM. Its size should be cylinders 1-13, with the rest of the disk dedicated to the LVM.


    # fdisk -l
    ...
    /dev/sda1 * 1 13 ... 83 Linux
    /dev/sda2  14 ...    8e Linux LVM
    # fdisk /dev/sdb
    >Command...:
    n
    >Command action
    >e extended
    >p primary partition (1-4)
    p
    >Partition number (1-4):
    1
    >First cylinder ...:
    <default>
    >Last cylinder ...:
    13 (value from /dev/sda1, the original /boot)
    >Command...:
    n
    >Command action
    >e extended
    >p primary partition (1-4)
    p
    >Partition number (1-4):
    2
    >First cylinder ...:
    <default>
    >Last cylinder ...:
    <default (end of disk)>
    >Command...:
    t
    >Partition...:
    1
    >Hex code...:
    83
    >Command...:
    t
    >Partition...:
    2
    >Hex code...:
    8e
    >Command...:
    a
    >Partition...:
    1
    >Command...:
    w
    
    • Create file system for /boot on /dev/sdb1


    # mkfs.ext3 /dev/sdb1
    
    • Setup new Physical Volume, Volume Group, and Logical Volumes and their filesystems on /dev/sdb2. Replace ?G with the size you want. LogVol00 should be LVM partition size minus your required swap volume size, LogVol01 should be your swap size.


    # pvcreate /dev/sdb2
    # vgcreate VolGroup01 /dev/sdb2
    # lvcreate --name LogVol00 --size ?G VolGroup01
    # lvcreate --name LogVol01 --size ?G VolGroup01
    # mkfs.ext3 /dev/VolGroup01/LogVol00
    # mkswap /dev/VolGroup01/LogVol01
    
    • Mount the new disk and copy contents from the old disk to it with cp -ax. Avoid copying /dev, /proc, /sys, /boot, /lost+found and /mnt


    # mkdir /mnt/newdisk
    # mount /dev/VolGroup01/LogVol00 /mnt/newdisk
    # cd /mnt/sysimage
    # for i in $(ls -1 | grep -v '\(dev\|proc\|sys\|mnt\|boot\|lost\)'); do echo $i; cp -ax /mnt/sysimage/$i /mnt/newdisk; done
    # cd /mnt/newdisk
    # mkdir {dev,proc,sys,mnt,boot}
    
    • Mount the new /boot and copy contents from the old disk to it, then unmount it


    # mkdir /mnt/{boot,newboot}
    # mount /dev/sda1 /mnt/boot
    # mount /dev/sdb1 /mnt/newboot
    # cp -ax /mnt/boot/* /mnt/newboot
    # umount /mnt/newboot
    
    • Install grub to the new disk


    # mount -o bind /dev /mnt/newdisk/dev
    # mount /dev/sdb1 /mnt/newdisk/boot
    # chroot /mnt/newdisk
    # grub
    > root (hd1,0)
    > setup (hd1)
    > quit
    
    • Fix your /boot/grub/grub.conf


    # vi /boot/grub/grub.conf
    :%s/VolGroup00/VolGroup01/g
    :wq
    
    • Redo your initrds


    # cd /boot
    # for i in $(ls -1 initrd* | grep -v bak); do mv $i{,-bak}; ver=$(echo $i | sed 's/initrd-//;s/\.img//;'); mkinitrd /boot/$i $ver; done
    
    • Exit from the chroot


    # exit
    #
    
    • Fix your /etc/fstab


    # vi /mnt/newdisk/etc/fstab
    :%s/VolGroup00/VolGroup01/g
    :wq
    
    • At this point, shut down and remove the old disk. Boot again into rescue mode. /dev/sdb will now be /dev/sda and mount to /mnt/sysimage

    • Label /boot


    # e2label /dev/sda1 /boot
    
    • Remove the CD, and you should be able to boot into the resized disk at this point.
    From brent
  • For imaging copies of your disks, you could try Ghost, Fog, Clonezilla, etc. (Even VMware Converter, etc.).

    For a filesystem copy, I'd recommend rsync and the like.

    From gWaldo

Are there complete System Center Configuration Manager alternatives?

I have been researching System Center Configuration Manager 2007 R2 SP2 and struggling to swallow the SCCM pill. The tool seems outdated and not well equipped for managing an environment with the latest versions of Server 2008 R2, SQL 2008 R2, Exchange 2010, SharePoint 2010, etc.

I need software that will help me:

  • Deploy new software
  • Monitor what software is deployed
  • Deploy updates
  • Install new server and client OSes with specific configurations
  • Monitor drive space and be able to send out alerts
  • Aggregate event logs so that there is one central place for monitoring the health of an organization
  • Be configurable via script
  • Have a good DR plan so that all the effort poured into setting everything up is not at risk

These features line up well with the advertised features of System Center Configuration Manager and possibly Operations Manager as well but both of these pieces of software feel old and out of date.

Do I have any real alternatives in this market space?

I have seen Nagios and Zenoss, but we are a Windows shop, and adding the maintenance and management of a Linux server for this purpose is probably more work than dealing with the quirks of SCCM.

  • Other than the patch management pieces, it sounds like you need to add SCOM to your environment.

    From Jim B

IIS6 cannot accept SSL connection

Hi

I have a server on a network. I used SelfSSL to generate the certificate. When I tick the "Require Secure communication" option on the website's security tab, the web site cannot be opened anymore.

Using the https protocol, I get a 403 error.

Thanks

Would you like to enter a security context?

When I log in, it asks me "Would you like to enter a security context?"

I have SELinux enabled and I'm using Fedora 12. How do I resolve this?

  • Unless you have a specific need for SELinux I would just disable it, as it makes troubleshooting things more complicated.

    1) Boot into single user mode: when you first start your computer, the GRUB screen (where you choose your operating system) appears. Select the Fedora entry that you want to boot into, but press the 'a' key instead of pressing Enter, then append 'single' to the arguments there.

    2) Then set SELINUX=disabled in /etc/selinux/config

    3) reboot
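    The relevant lines in /etc/selinux/config end up looking something like this (a sketch; "permissive" logs denials without enforcing them, "disabled" turns SELinux off entirely):

    # /etc/selinux/config
    SELINUX=disabled
    SELINUXTYPE=targeted

    You can also switch to permissive mode on the fly with setenforce 0, which lasts until the next reboot.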

    Tom O'Connor : I tend to set it to permissive, rather than disabled, because at least that way it logs stuff that has a different SELinux configuration to what is expected.
    From mark
  • You could try something similar to this bug report by adding a < /dev/null

  • Thanks, everybody. The issue is with how pam.d works with SELinux enabled. I turned off the pam.d features and it's working now, though I need to take another look at /etc/pam.d/login instead of disabling it completely.

    I had two choices: 1. disable SELinux, or 2. disable pam.d for the login server. I chose 2.

Looking for a NTP Server Software for Windows

I'm looking for a preferably free NTP server for Windows Server 2003/2008. We have already tried the built-in Windows Time Service, but our tests showed that it is not very accurate; we see time differences of up to 500 ms. The maximum time difference we can allow for our application is ~100 ms.

Now we have already used the Meinberg NTPd for Windows (http://www.meinberg.de/english/sw/index.htm). It works great except for one big issue: if there is a network connection problem between client and server, the NTP server goes into a panic state and won't give the client a new time until we restart the NTP service. This is a big issue which has caused us some trouble. It was working fine for months until there was a network problem we didn't notice; we only noticed it after a week, when the time difference was already 30 seconds on the clients.

So please suggest an alternative NTP server for Windows. I did Google, but I get a lot of unrelated search results.

Thanks in advance!

Edit: So far the Windows port of ntpd has been very accurate and I'd like to stick with it. The only problem is the "panic state" after a network disconnect. Maybe someone here knows what the cause of this is and how to fix it. Also, I forgot to mention that we have a server/client setup like this:

Server1 --> Server2 --> Server3 --> Client1 --> Client2 --> Client3

So Server2 gets its time from Server1, Server3 gets its time from Server2, and the Clients get their time from Server3. Also, there are clients connected directly to Server2. It is important that all Servers and Clients have the exact same time (within ~100ms)

Now there was a network problem with Server3 and its clients. The servers run the ntpd port for Windows, which acts as NTP server and client. The clients have Dimension4 as the NTP client. After the network problem, the error message in D4 was something like this (off the top of my head; I don't have the exact error message):

Server response: The server is in a panic state (could not sync clock)

I read through the ntpd docs, and the only mention of "panic" is when the time difference exceeds 10000 seconds, which will cause ntpd to exit, but this was not the case. Also, there is a "-g" command line switch to disable the panic exit, but it is already set by default.

Any ideas what could cause the panic state and how to get rid of it next time?

  • I have been using NetTime for many years, both as client and server.

    This software is primarily an NTP client, but it also works well as an NTP server within a LAN (option: allow other computers to sync to this computer).

    Simon : Thanks a lot! I'm going to try it and report back
    Simon : I did try it in our test environment, and I have two problems with NetTime: the accuracy is similar to the Windows w32time service, so I get differences of up to 500 ms; secondly, we use Dimension4 on our clients to get the time, and they fail to sync with NetTime because of an "invalid stratum". From what I read, NetTime registers itself with a stratum of 16.
    lg : Sorry, I never needed this accuracy, so I didn't know about these problems.
    From lg

Problems with SCP stalling during file copy over VPN

I have a series of files I need to copy via SCP over a VPN to a remote Linux server each night. The files are not large (we're talking about tens of megabytes here), but the file copy almost always stalls after a few seconds. Running the SCP command with -vvv, I see the following over and over throughout the attempted copy process:

debug2: channel 0: rcvd adjust 131072
debug2: channel 0: rcvd adjust 131072
debug2: channel 0: rcvd adjust 131072

Any thoughts? I see this question being asked in various places out there, but never any answers. Any help would be appreciated.

  • Are you running the latest version of whatever SSH servers and clients you're using? I'd also recommend hitting their mailing lists on this, as it seems rather obscure.

    From Mark C
  • We had similar spurious problems with scp to some Linux servers (Debian, 2.6.24-etchnhalf).

    We were able to do away with the stalls by disabling the TCP variable tcp_sack ("tcp selective acknowledgements") on the remote servers:

    sysctl -w net.ipv4.tcp_sack=0

    On Debian, tcp_sack is enabled by default. If I read http://www.frozentux.net/ipsysctl-tutorial/chunkyhtml/tcpvariables.html correctly, it should make no sense to disable this option, but in our case, it helped.

    You can make this change permanent by adding a line net.ipv4.tcp_sack=0 to /etc/sysctl.conf (on other Linux systems YMMV).

    From flight
  • Are you allowing ICMP through the VPN? "TCP connection stalls after a few seconds" often translates to "PMTU black hole".
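    For illustration, two common fixes applied on the VPN gateway (iptables syntax; chains and interfaces are placeholders, adapt them to your own ruleset):

    # let Path MTU Discovery work by allowing the relevant ICMP through
    iptables -A FORWARD -p icmp --icmp-type fragmentation-needed -j ACCEPT

    # or work around a broken path by clamping the TCP MSS to the path MTU
    iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu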

Apache 2.2 + PHP 5.3.2 + cURL not working

When I try to start the Apache server (with PHP and cURL extension), it says:

The Apache2.2 service is restarting.
Starting the Apache2.2 service
The Apache2.2 service is running.
rmine the server's fully qualified domain name, using 192.168.1.8 for ServerName
[Tue Sep 07 14:30:57 2010] [warn] pid file C:/Program Files (x86)/Apache Software Foundation/Apache2.2/logs/httpd.pid overwritten -- Unclean shutdown of previous Apache run?

(I'm guessing that the fourth line should start with something like "Determine"; probably a race condition.)

Then it terminates (and if I retry I get the same "unclean" message). If I comment out the extension=php_curl.dll in php.ini, httpd.exe works again. The PHP error log is empty.

When I run php-cli, cURL functions normally, so it must be caused by some magic behind the scenes that I don't know anything about.

Suggestions? Can I retrieve more information some way? Thanks!

Edit: Apache + PHP works nicely without cURL.

  • Looks like a non-production server to me. You can install XAMPP or WAMPP; they have all the extensions already functioning out of the box. No need to waste time with server config.

    [XAMPP] - http://www.apachefriends.org/en/xampp-windows.html

    [WAMPP] - http://www.wampserver.com/en/

    From Alex
  • You might be using the wrong PHP version:

    Which version do I choose?

    If you are using PHP with Apache 1 or Apache2 from apache.org you need to use the VC6 versions of PHP

    So first make sure you're using VC6 (Thread safe) and check if that solves the problem, or alternatively remove your current apache, php and mysql and install Zend Server CE.

    Jonas Byström : I was already using VC6 TS for Apache 2.2 plug-in, but upgraded to PHP 5.3.3 and that did the trick for me. Thx!
    From wimvds

How do I populate my ProLiant DL320 G6 server with memory?

I have a problem with a new HP ProLiant DL320 G6 server. From the beginning it had too little memory, and I have now ordered more memory twice. Now I am struggling to populate the server with these modules.

I have two of these modules: HP 4GB 2Rx4 PC3-10600R-9 Kit

How should I populate them on the motherboard?

I have tried to understand the documentation, but it's very unclear to me. I have populated slots 1 and 4, but I cannot start the server with that configuration.

Experiences in Upgrading from Exchange 2003 to Exchange 2010

I'm currently running an Exchange 2003 SP2 cluster on a Server 2003 AD forest (in native 2003 mode), and we are beginning to plan the upgrade to Server 2008 AD and Exchange 2010. We have two main sites, one middle-sized office, and a couple of smaller sites which have DCs (which may be RODCs after the upgrade). Currently all of our Exchange cluster is in my main site, but we are considering using the new datastore paradigm for load-balancing/failover at the other large site; this is not set in stone, though.

Right now we are in the information-gathering and planning phases. I am looking for input on any gotchas experienced while performing either upgrade, but especially the Exchange upgrade.

Gotchas? What surprised you? What wasn't documented? What said one thing but was misleading? (Confusing either in content or severity.) What is great or horrible about the new system? What worked well? What worked poorly? If you were to do it over again...?

(I know that this isn't so much a question that can be definitively answered, but I'm happy to reward insight and useful resources (not the Microsoft documentation, but Blogposts are welcome) with upvotes.)

UPDATE A couple items of note:

-We are not currently using OWA (currently only the admins), but it may become more of a consideration with iOS devices.

-We do have a small number of Blackberries in the environment (< 10%).

-In addition to the standard Exchange connectors, we have a third-party connector for Captaris RightFax integration.

  • The TechNet documentation is really complete. Follow it to a tee. It goes over practically everything regarding getting your 2010 setup going. There are only four caveats that I feel are worth mentioning.

    First, read up on routing group connectors. The install takes care of creating the first connector, but if you have multiple 2010 servers, you'll need to create additional connectors. Also, don't forget to create receive connectors for your old 2003 setup. You want mail to flow properly while you're in a migration. Surprisingly, I don't remember the docs talking much (or at all) about this.

    Second, your legacy OWA might break. This is a situation I ran into during my first migration. You can see a discussion about it on Technet here. Basically, when a 2003 user logs onto a 2010 OWA server, they should be forwarded to a 2003 front end server. This didn't always happen properly. Setting the legacyredirecttype to manual fixed the problem, although it makes it a little more difficult for the end users to figure out what's happening.
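    (If I recall correctly, that setting is exposed through the Exchange Management Shell roughly as below; treat the parameter name as something to verify with Get-Help Set-OwaVirtualDirectory before relying on it.)

    Set-OwaVirtualDirectory "owa (Default Web Site)" -LegacyRedirectType Manual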

    Third, removing your 2003 setup isn't really touched on at all in the 2010 Technet docs. You should follow the information for 2007 available here. When instructed to uninstall Exchange 2003, simply uninstall 1 cluster node at a time. When you get to the last node, let the installer know it's the last node and it will remove the server from Active Directory.

    Fourth, I don't remember the docs talking much about ActiveSync. Luckily, the Exchange team has a great blog. They definitely talked about it here.

    My first migration went really well. Moved mailboxes over the weekend, came in on Monday and nobody with Outlook noticed anything other than better performance. OWA users were a little surprised, but they learned it quickly.

    gWaldo : +1 Thanks for the input. It reminded me about a couple of pieces that need to be considered. I already knew about "You had me at EHLO", but I'm glad you pointed it out.
    From Jason Berg
  • I just completed a migration of 300 users and we ran into a serious snag. Outlook 2003 SP3 was our primary email client. Outlook 2003 uses UDP notifications to update its views when in online mode. Exchange 2010 doesn't support this.

    The issue is that a user could have to wait up to 60 seconds in order to get his item deleted.

    A couple of solutions: upgrade to a minimum of Rollup 1 or SP1, and add the Maximum Polling Frequency registry key with a value of 10000.

    Once done, I was able to get the 60-second wait down to 7-12 seconds. Not great, but at least usable.

    Moving back to Exchange 2007 was not an option and we leapfrogged directly to 2010. So we were stuck.

    You can enable Cached mode; this fixes the problem.

    But what if you have a Citrix farm that publishes this version of Outlook? Then you don't have a choice but to live with the poor performance or upgrade your client version to 2007 or later.

    From my perspective, Microsoft has done a very poor job publicizing this issue, and it is a real show-stopper for most organizations. If you can, upgrade your client versions before upgrading to 2010; it will save you a lot of hassle.

    Anyways -- Hope this helps for your next Migration.

    Thanks,

    Dave Kawula, Principal Consultant, TriCon Technical Services Inc.

    gWaldo : Thanks for the input! That's a great (well, it sucks, especially for you having to deal with it) gotcha to know ahead of time! +1 to you.
    MakerOfThings7 : Does this polling interval also affect delegates? Access to delegate mailboxes is often in non-cached mode.

At Windows start up the Time Service takes about 30 minutes to converge

I'm looking for information on how the Windows Time Service over a domain affects the hardware clock. The machines are clients on a Windows AD domain with the basic time services. It's not our network, so I don't have a lot of detail.

1) We are doing some transaction-based processing with a remote host. We normally see a difference of about 250 ms between OS clocks, including network delays. This is fine.

2) If the computer is rebooted, the difference is about 10 seconds. It takes about 30 minutes to get back down to 250 ms again. I believe this is called convergence.

I'd like some direction on where to look for information regarding:

a) Does the Windows Time service ever update the real-time hardware clock to keep it close?

b) Is there a way to tell the Time service to do a faster convergence?

c) I think this service is a bit different from NTP, which could be configured. Right?

My searches have not addressed the RTC and ways to speed up the convergence.

  • It looks like we are going to use the NET TIME command to sync the system clock. Specifically, NET TIME /SET /Y will be run at startup. It will set the time immediately rather than slowly converging. The time server used is reported when running that command, so we can insert it into the batch file.

    NET TIME \\myTimeServer /SET /Y will go directly to the server, speeding things up. As a failsafe, we can check the return code in case the server is unavailable and run it without the name, as in the first example.

    If you can't directly reach the time server, you can set it to the domain PDC clock using this command. This is probably pretty good as a second choice. NET TIME /domain:myDomain /SET /Y.

    Rich Shealer : A little update for those that find this thread. We found we were getting a drift over an eight-hour period of up to almost 2 seconds between clocks. This is as outlined by Microsoft, which does not guarantee anything better than 2 seconds. Running the NET TIME command as above every two hours, we are able to keep the machines within 800 ms.
  • Use NET TIME /SET /Y if the clock is beyond the convergence threshold (can't remember the default value). Otherwise, the Windows Time Service can be given a "poke" using w32TM /RESYNC.

    There are also some useful switches for monitoring the convergence progress: /MONITOR and /STRIPCHART.

    Make sure you have more than one reliable time source, preferably three.
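    A few example invocations (the server and domain names are placeholders):

    REM force an immediate resync instead of waiting for convergence
    w32tm /resync /nowait

    REM compare this machine's clock against a reference server over a few samples
    w32tm /stripchart /computer:myTimeServer /samples:5

    REM show the offset of the domain controllers relative to their sources
    w32tm /monitor /domain:myDomain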

Tool for ongoing monitoring of a webserver?

I need to monitor a SaaS by going to a specific URL (this is currently done either manually in a browser or by sending an HTTP request programmatically in C#) and, if the response is anything other than HTTP response code 200, send an email and/or SMS to a predefined list of addresses/phone numbers.
We'd like to be able to specify the number of retries after which no request will be sent for a specified amount of time, and also have the tool stop sending alerts after it has sent a predefined number of them.

Is there an existing tool available that can perform the above?

Thanks!

  • Nagios is my tool of choice for this sort of thing. Have a look here. It probably looks like overkill, but it certainly does exactly what you want. We have it running for about 200 devices (servers, printers, router, switches) and about 800 monitored services.

    If you have detail questions about configuration, just post them here and I'll be glad to answer them.

    Matt : +1 for Nagios. My current Nagios setup monitors 200+ hosts, 700+ services. If Nagios is more than you need, I'd bet you could script up something with wget or curl.
    brent : Another + for Nagios. You can configure people and groups of people, assign them to different monitoring alert periods (e.g., after hours support crews get alerts while 9-5ers don't). We also have it setup to send SMS warnings through a GPRS modem.
    From wolfgangsz
  • Pingdom has a low cost service providing exactly what you describe. You can monitor a single server for free. 5 servers is $9.95 per month.

    From MarshallY
  • Zabbix has a web monitoring feature that provides what you need. Hope this helps.

    From Maxwell
  • Nagios is your best choice for onsite monitoring on Linux. For hosted ("cloud") monitoring we like AlertFox. Unlike other services, it uses a real web browser for transaction monitoring. I find that very useful. It is something that Nagios cannot do out of the box, so it complements our internal Nagios setup.

    If you are on a tight budget, AlertFox has a free account option, but it is a bit hidden (link here) ;-)

    From MFauser

Possible to configure Cisco switch (IOS) via SNMP?

Is it possible to configure a Cisco switch running IOS via SNMP? I know there is a method for initiating a TFTP copy via SNMP (doc), but is there something like port level config directly from SNMP writes?

Alternatively, is there a way to initiate transferring a configuration snippet to apply, rather than replacing the entire configuration?

Let me know if you'd like anything clarified. I'm trying to avoid using Expect or anything that is not similar to accessing an API.

  • I honestly don't know of any reason you can't configure IOS via SNMP... however I would suggest NOT doing it. SNMP is very insecure. If you're not worried about security, you can simply dump your config changes into a text file & blindly replay them into a telnet session... which I would also recommend not doing.

    L.R. : It is true that SNMP v1 and v2c do not provide any serious security (anyone can read plain text packet content), but SNMPv3 is quite secure - it provides content encryption and authentication.
    brent : Exactly L.R. To TheCompWiz, I specifically wanted to avoid using something like Expect (which allows telnet scripting).
    From TheCompWiz
  • To answer my own question, it doesn't look like Cisco provides high-granularity configuration via SNMP (e.g., port configuration), but it does provide a method for initiating an FTP/TFTP/SCP config copy to the switch. This copy can be performed to the running configuration, which allows merging. This means a configuration snippet could be written to a text file, then TFTP'd to the switch, where it will be merged with the running config rather than replacing it. If copying to the startup configuration, a merge operation is not done, so it replaces the entire config. An important distinction ;)

    Details here: http://www.cisco.com/en/US/tech/tk648/tk362/technologies_configuration_example09186a0080094aa6.shtml
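    For illustration, the copy can be kicked off with snmpset against the CISCO-CONFIG-COPY-MIB, roughly as below (a sketch from memory: the row index, community string, addresses and filename are placeholders, the Cisco MIBs must be loaded for the names to resolve, and the object names/values should be checked against the document above):

    # create a ccCopyTable row (index 111): pull a file from a TFTP server into the running config
    snmpset -v2c -c private 10.0.0.1 \
        ccCopyProtocol.111 i 1 \
        ccCopySourceFileType.111 i 1 \
        ccCopyDestFileType.111 i 4 \
        ccCopyServerAddress.111 a 10.0.0.50 \
        ccCopyFileName.111 s snippet.cfg \
        ccCopyEntryRowStatus.111 i 4

    # poll ccCopyState.111 until the copy finishes, then delete the row
    snmpset -v2c -c private 10.0.0.1 ccCopyEntryRowStatus.111 i 6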

    From brent

mysql config wordpress high loads

My query

./mysqlslap --user=root  --concurrency=50 --iterations=1 --pass=toor -vv --create-schema=db --query="SELECT SQL_CALC_FOUND_ROWS  wp_posts.* FROM wp_posts  INNER JOIN wp_term_relationships ON (wp_posts.ID = wp_term_relationships.object_id) INNER JOIN wp_term_taxonomy ON (wp_term_relationships.term_taxonomy_id = wp_term_taxonomy.term_taxonomy_id)  WHERE 1=1  AND wp_term_taxonomy.taxonomy = 'category'  AND wp_term_taxonomy.term_id IN ('1')  AND wp_posts.post_type = 'post' AND (wp_posts.post_status = 'publish') GROUP BY wp_posts.ID ORDER BY wp_posts.post_date DESC LIMIT 35, 6"

Some results

   Average number of seconds to run all queries: 68.904 seconds
    Minimum number of seconds to run all queries: 68.904 seconds
    Maximum number of seconds to run all queries: 68.904 seconds
    Number of clients running queries: 50
    Average number of queries per client: 1

Loads 32+

My machine

  • 4GB
  • E6550 @ 2.33GHz (2 core)
  • Single sata disk
  • Debian Lenny + Apache + PHP + MySQL

My MySQL config can be read at pastebin.ca/1934946; how can I adjust this? Thanks

    • Install w3tc or wp-super-cache
    • cut down on the number of posts/tags (remember that WordPress stores revisions of posts in the wp_posts table, so, if someone likes to leave themselves logged in and autosave runs every 5 minutes, you can get quite a few excess revisions in there; see the cleanup sketch after this list)
    • replace wordpress
    • get a faster disk
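    A sketch of the revision cleanup mentioned above (back up the database first; this removes only the revision rows themselves, not any orphaned metadata):

    DELETE FROM wp_posts WHERE post_type = 'revision';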

    While I disagree with some of the tunables in the config based on gut feelings, http://blog.mysqltuner.com/ contains a script that you can run that gives you input on your config settings. You'll want to have mysql running for 48+ hours for it to give the best recommendations. In particular, look at these settings:

    • table_definition_cache
    • thread_concurrency (On a quadcore, I usually run this at 6)
    • join_buffer_size (you have it listed as join_buffer which may be incorrect, look at show status/show variables to make sure your config values are taking effect as you expect)

    If you aren't running email on the same partition as your database, consider mounting the filesystem with noatime,nodiratime. Check other tunable settings on your filesystem.
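    For example, an fstab entry with those options might look like this (the device and mount point are placeholders for wherever your MySQL data lives):

    /dev/sdb1  /var/lib/mysql  ext3  defaults,noatime,nodiratime  0  2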

    From karmawhore