Friday, January 14, 2011

block level vs. file level cloning?

I've always been a block-level kinda guy, but I'm interested in hearing some real-world experiences with file-level cloning. What are some of the advantages and disadvantages, and what tools work best?

  • Well, the most obvious advantage of file-level cloning is that you don't waste time cloning unused blocks. E.g. a clone of a 40G partition with 10G of data will require 40G of reads and 40G of writes at the block level, but close to 10G of reads and 10G of writes at the file level.

    One minor benefit of file-level cloning is that it effectively defragments your filesystem perfectly at the same time, whereas block-level cloning clones the fragmentation as well.

    Block-level cloning is simpler, and you don't have to worry about permissions or other issues: you know with 100% certainty that the clone will be identical to the original, whereas file-level cloning can go wrong if you mess up your settings.

    Mark : If you have a tool to do block level cloning that is aware of the filesystem structure, it can skip unused blocks, and so it may not need the full 40GB of reads/writes. I'm thinking zfs send/receive here, but I'm sure there are other filesystems or tools that do something similar.
    womble : If it's aware of the filesystem structure, it's a file-level cloning tool.
    From davr
  • My worst experience with file-level cloning was a 20Gig NT4 partition with about 1.6m tiny files. The transfer rate would have been ~8Meg/sec with block-level cloning (over a 100Meg network) and should have taken somewhere between an hour and an hour and a half; it ended up at <150K/sec because of all of the file system/permissions overhead and took almost two days.

    From Helvick
  • As people have said, use block level when the hit on the file metadata is too great; use file based when there are not many files.

    I am used to a block replication system that only replicates blocks that have changed and are allocated to files. This can work very well.

    File based replication is cheap and easy to do on an open system however rsync/unison scripts need more maintenance than replication on a NAS or a SAN.

    If there are millions of files then block level is the only way to go; we have a number of filesystems that have 40 million files in 600GB, and file-based replication is not going to work there.

    From James

TrueCrypt or EFS?

A subset of my users need a way to share an encrypted folder on the file server. Security is the most important, followed closely by ease of use. It appears that TrueCrypt is easier to set up. Does EFS have any advantages over TC to justify the extra setup?

Windows Server 2003 and XP, Active Directory, 100 user LAN.

Edit: I originally missed the limitation of single-user R/W access for Truecrypt. Looks like EFS is better once I get past the setup.

  • The section on Sharing over a Network from the TrueCrypt user's guide makes it look like you have a couple of solutions-- mounting the shared file hosting the volume locally on computers or mounting the file hosting the volume on the server computer. The big difference between the two is that the volume's contents will be accessible read-write to all client computers when it's mounted on the server computer and shared (albeit access to the data will cross the wire "in the clear") versus the volume being mounted read-only on all computers when mounted locally on each machine.

    If your users need seamless read/write access to the encrypted files either a TrueCrypt server-side mount or EFS is probably a better choice. The data is still going to cross the wire in the clear with EFS, as with TrueCrypt and the server-side mount.

    Some people get really down on EFS but I think it fills a niche and solves a problem. It's well designed for what it is, but the problem that it seeks to solve is fundamentally awkward to solve.

    Configuring EFS in an AD environment really isn't too difficult to set up. The most difficult part is wrapping your mind around the recovery agent functionality and exporting the recovery key to a safe offline location. You will need a PKI, but Microsoft's Certificate Services can automate most of the process of issuing certificates to users (have a look here for information about autoenrollment in Windows XP: http://technet.microsoft.com/en-us/library/bb456981.aspx)

    Have a look at the docs from Microsoft: http://technet.microsoft.com/en-us/library/cc962122.aspx (and another at http://technet.microsoft.com/en-us/library/bb457116.aspx)

    Multi-user access to EFS files is a bit of a "wart" on the part of Microsoft, but it's not too hard to deal with. There's a very good answer here re: multi-user access to EFS-encrypted files.

    Nathan Hartley : I thought I read that data to and from an EFS share WAS encrypted. [time passes] Ah! It is encrypted on-the-wire when ran in WebDav mode... Remote EFS Operations on File Shares and Web Folders http://technet.microsoft.com/en-us/library/bb457116.aspx#EHAA
    Jim B : on a side note if you want all traffic encrypted you simply need to enable domain isolation.
    Evan Anderson : @JimB: You're absolutely right in the sense that you should use an over-the-wire encryption mechanism, such as IPsec, if you want over-the-wire encryption of any data, EFS-stored or otherwise. The "domain isolation" term was always one that rubbed me the wrong way-- sounded like a marketing-ism.
    Jim B : @Evan- It's a pretty accurate term. If you are not a domain member you can't see any traffic in that domain. There are a couple of downloadable labs on the TechNet site to play with it.
  • EFS will allow you to use your existing AD and kerberos credentials to access the encrypted data.

    Truecrypt doesn't support multi-user access, and has no way of storing access credentials in a directory. Additionally, Truecrypt hasn't been FIPS 140-2 validated, so if you are encrypting to protect yourself against breaches of personally identifying information it isn't the right tool.

    Also consider commercial products like McAfee File & Folder encryption.

  • I'd recommend going with TrueCrypt for this scenario. It's probably going to be easier than EFS in this case.

    From KPWINC

Retrieve operational attributes from OpenLDAP

I've been having trouble trying to find some good documentation on how to retrieve operational attributes from OpenLDAP.

I would like to retrieve the base distinguished name of an LDAP server by doing an LDAP search.

How come my search doesn't work when I explicitly ask for the namingContexts attribute? I've been told that I need to add a plus ('+') sign to the attribute list.

If this is the case, should I get rid of the "namingContexts" attribute or have both?

ldapsearch -H ldap://ldap.mydomain.com -x -s base -b "" +
# note the + returns operational attributes

Edit: Note how it looks like the attributes requested are empty. Shouldn't the plus sign be in the attribute list? http://www.zytrax.com/books/ldap/ch3/#operational

reference: plus sign operator with OpenLDAP

  • How come my search doesn't work when I explicitly ask for namingContexts attribute?

    What is not working? Do you receive an error?

    When there is a plus sign it returns all the attributes, regardless of whether namingContexts is added to the list.

    Using:

    ldapsearch -x -H ldap://ldap.example.com -s base -b "" namingContexts
    

    Returns:

    # extended LDIF
    #
    # LDAPv3
    # base <> with scope baseObject
    # filter: (objectclass=*)
    # requesting: namingContexts 
    #
    
    #
    dn:
    namingContexts: o=example.com
    
    # search result
    search: 2
    result: 0 Success
    
    # numResponses: 2
    # numEntries: 1
    

    It is also listed using:

    ldapsearch -x -H ldap://ldap.example.com -s base -b "" +
    

    Returning:

    # extended LDIF
    #
    # LDAPv3
    # base <> with scope baseObject
    # filter: (objectclass=*)
    # requesting: + 
    #
    
    #
    dn:
    structuralObjectClass: OpenLDAProotDSE
    namingContexts: o=example.com
    supportedControl: 2.16.840.1.113730.3.4.18
    supportedControl: 2.16.840.1.113730.3.4.2
    supportedControl: 1.3.6.1.4.1.4203.1.10.1
    supportedControl: 1.2.840.113556.1.4.1413
    supportedControl: 1.2.840.113556.1.4.1339
    supportedControl: 1.2.840.113556.1.4.319
    supportedControl: 1.2.826.0.1.334810.2.3
    supportedExtension: 1.3.6.1.4.1.1466.20037
    supportedExtension: 1.3.6.1.4.1.4203.1.11.1
    supportedExtension: 1.3.6.1.4.1.4203.1.11.3
    supportedFeatures: 1.3.6.1.4.1.4203.1.5.1
    supportedFeatures: 1.3.6.1.4.1.4203.1.5.2
    supportedFeatures: 1.3.6.1.4.1.4203.1.5.3
    supportedFeatures: 1.3.6.1.4.1.4203.1.5.4
    supportedFeatures: 1.3.6.1.4.1.4203.1.5.5
    supportedLDAPVersion: 2
    supportedLDAPVersion: 3
    supportedSASLMechanisms: DIGEST-MD5
    supportedSASLMechanisms: CRAM-MD5
    subschemaSubentry: cn=Subschema
    
    # search result
    search: 2
    result: 0 Success
    
    # numResponses: 2
    # numEntries: 1
    
  • What version of OpenLDAP are you using? What does "doesn't work" mean precisely? What is the output when you run that command?

    I ran it on my OpenLDAP instance and it produced output similar to carrell's.

    I'm wondering if it may be a permissions issue. Perhaps anonymous users don't have read access on dn="" or access to the operational attributes in question?

  • The first access rule in my slapd.conf explicitly makes sure that this is permitted; make sure you have something similar:

    # Let all clients figure out what auth mechanisms are available, determine
    # that TLS is okay, etc
    access to dn.base=""
            by *            read
    
    From Phil P

Positive vs. negative monitoring

I've been looking at monitoring for a while. My org didn't have any before I came, other than 'where'd my Yahoo go'. It appears that most packages out there focus on negative monitoring (i.e., this service/host was up and now it's not). This seems like a valid first step, but what can you look at past that for positive monitoring (i.e., that port wasn't up and now it is, or hey, look, that's a new DHCP host)? I suppose it's possible to have a declaration for every single port/network address in Nagios, but that seems cumbersome.

Does anyone know of a better tool for this kind of affirmative monitoring of ports/hosts?

  • Nagios includes a wide range of plugins and modules for active/intrusive monitoring and passive monitoring. It should include everything you need!

    From Aiden Bell
  • What you are looking for isn't really monitoring as much as it is security. I am not a security expert, but there are a number of network scanning tools out there that can be "taught" what to expect and then will tell you if something is out of the ordinary.

  • For hosts that you know about, Nagios/Zenoss/OpenNMS are your best bet - they can be configured to notify when hosts and/or services go down, or come back up. They're mostly smart enough not to start alerting about ALL the services on a host if the host itself is down. It's important to configure these sorts of things properly, so that you don't get deluged with 20 alerts because of a server reboot; if there's that much noise about trivial stuff, sooner or later you'll end up pretty much ignoring it and missing something important.

    For the second half of your question, Catherine's right; you're looking at an Intrusion Detection System (IDS). These can be configured to know what your network should look like in terms of hosts, topology, traffic types and so on, then alert you if anything other than what you've defined as "ordinary" happens. A couple of examples would be Snort and OSSEC.

    From RainyRat
  • We use nmap for this. We have a simple script wrapping nmap that scans our entire network and stores the XML output. The next night it runs again and compares the output. If any new hosts or ports show up, an email is sent to the admin staff.

    The just-released Nmap 5.0 includes a utility for just this purpose called Ndiff.
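
    For anyone wanting to roll their own before upgrading, here's a minimal sketch of the same idea, assuming nmap and ndiff are on the path (the subnet and mail address are placeholders):

    #!/bin/sh
    # Nightly scan: compare tonight's results against last night's
    # and mail the admins if anything changed.
    cd /var/lib/netscan || exit 1
    nmap -oX today.xml 192.0.2.0/24 > /dev/null
    if [ -f yesterday.xml ]; then
        # ndiff exits non-zero when the two scans differ
        ndiff yesterday.xml today.xml > changes.txt \
            || mail -s "Network changes detected" admin@example.com < changes.txt
    fi
    mv today.xml yesterday.xml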

    From Insyte
  • For your specific questions, I'd use something like arpwatch to watch for changes in ARP addresses and portsentry to watch for anyone trying to connect to unused ports. You could use other tools as well.

    These tools can then be integrated into an active or passive check for Nagios.

    Saurabh Barjatiya : arpwatch will only work for the subnet in which the host is running, and portsentry is mainly for protecting an individual host when it detects a port scan.
    From David

mobileadmin with exchange 2003

We have several users connected to our Exchange server via ActiveSync, using a variety of devices. The documentation for mobileadmin appears to indicate that the app should provide a list of active devices somewhere, but I can't seem to find it. Is there a config change I need to make, or are the docs inaccurate?

  • I think you are misreading it. You get:

    View a list of all devices that are being used by any enterprise user

    Select/De-select devices to be remotely erased

    View the status of pending remote erase requests for each device

    View a transaction log that indicates which administrators have issued remote erase commands, in addition to the devices those commands pertained to

    ANervousTwitch : that's probably correct, since that's the functionality that's there now. It seems easily read to mean a list of all devices, though.
    From Jim B

Escalate a notification regardless of timeperiod

My on-call rotation defines time periods based on a person's geographic location, but our escalations need to go out to the entire team regardless of when they occur. Currently the only way I've found to configure this in Nagios is to create two contacts for each person: one with a specific timeperiod, the other with 24x7, then use the 24x7 contact in the escalations. I'd like to be able to maintain only one contact per person.

define contact {
    contact_name                        bobjones
    service_notification_period         ops-shift4-oncall
    host_notification_period            ops-shift4-oncall
    host_notification_options           d,u,r
    service_notification_commands       service-notify
    host_notification_commands          host-notify
    email                               bjones@foo.com
    pager                               bjones
}

define contact {
    contact_name                        bobjones_24x7
    service_notification_period         24x7
    host_notification_period            24x7
    host_notification_options           d,u,r
    service_notification_commands       service-notify
    host_notification_commands          host-notify
    email                               bjones@foo.com
    pager                               bjones
}
  • Have you checked out http://nagios.sourceforge.net/docs/1_0/escalations.html?
    I think you can just add the one user to multiple contact groups to do what you want.
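
    For example, a hypothetical sketch (group and member names are made up) where one contact sits both in its shift group and in a team-wide escalation group:

    define contactgroup {
        contactgroup_name   ops-shift4
        alias               Shift 4 on-call
        members             bobjones
    }

    define contactgroup {
        contactgroup_name   ops-escalation
        alias               Whole team, for escalations
        members             bobjones,janedoe,jimbrown
    }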

    Stick : correct, but I need to add every user. Right now I have a group of contacts and a duplicate group of 24x7 contacts, which gets the escalations, but it's the same list, just with a different time period. That duplication is what I'm trying to avoid.
    Catherine MacInnes : what do you use the first set of contact definitions for?
    Stick : The first set defines the appropriate time period for that person. For example, I'm on the east coast; for us that's, say, 1pm->5pm, and during that time I should get the first notification. However, if I don't answer, it should escalate to everyone regardless of what timezone they are in.
    From Keith
  • You can use escalation_period to define when an escalation goes out. So you can do something like

    define serviceescalation{
      host_name             host.example.com
      service_description   this service
      first_notification    1
      last_notification     1
      notification_interval 15
      escalation_period     ops-shift4-oncall
      contact_groups        shift4
    }
    
    
    define serviceescalation{
      host_name             host.example.com
      service_description   this service
      first_notification    1
      last_notification     1
      notification_interval 15
      escalation_period     ops-shift3-oncall
      contact_groups        shift3
    }
    

    etc.

    Then make sure that the contact groups contain the appropriate people. That sends the first notification to the appropriate group depending on the time period. Then create a dummy group ("no one" or something similar) and put that in the service declaration itself, so that the service doesn't actually send to anyone at all.

  • It might not be appropriate for your situation, but I got around having multiple contacts for the same person in Nagios by using distribution groups. Instead of setting up an individual, I set up groups in Exchange. The Nagios contacts never change, but the distro groups fluctuate all the time.

    From bread555

How can I connect my Apple iMac to my company's VPN network?

I'm looking for the easiest way to connect to my computer and servers at work (all using Windows) from my iMac (running Mac OS X.5) at home, without having to use VNC or LogMeIn.

It's a small network using a Router with VPN.

Thanks.

Addition: VPN PPTP is supported (Why is it so important to know?)

  • What type of VPN? I assume it's probably PPTP; if so, Mac OS X has a built-in PPTP client. You can find more info here: Mac OS X PPTP

    waszkiewicz : That was a quick and clear answer! Thanks a lot.
    From TimK
  • Addition: VPN PPTP is supported (Why is it so important to know?)

    In short: Because the built-in VPN client supports it. :)

    There are several VPN protocols, plus some VPN server vendors add proprietary features to the protocols they support. For example, Cisco added "Mutual Group Authentication" to the IPSec protocol. VPN clients that support IPSec but do not support Mutual Group Authentication cannot connect to a Cisco server if that feature is enabled and set as mandatory.

    waszkiewicz : Ok, very clear! Thank you for that explanation. I thought it was obvious to have a PPTP VPN. Can't know everything!
    From openfkg
  • Note that OS X has a different default than Windows when it comes to redirecting all traffic over the VPN. The remote VPN server may or may not actually allow such traffic. Windows by default sends all traffic over the VPN, but a Mac does not. (On a Mac, the Advanced button shows an option "Send all traffic over VPN connection"; Windows has a similar option.)

    When all traffic is sent through the VPN, even "normal" web browsing is done through the remote VPN server. This might be useful when traveling in countries that filter certain web sites. As the VPN connection is encrypted, using it for all traffic might also be more secure when you don't know who can listen in on the network (hotel, internet café, Starbucks WiFi, ...) used to connect to the internet. And it might be a bit more secure in that any "hacker" (or spyware) who has gained access to the workstation will be disconnected as soon as the VPN connection is started. It depends on the security of the VPN server whether or not such unwanted traffic could be re-established through the VPN server.

    From Arjan
  • In my case I had to go as far as installing a VM of one of our laptop builds onto my Mac. We're locked down pretty tight, and there are some dependent resources that need to be installed and running before our VPN server will accept a connection and authenticate into our network.

    From OhioDude

Xen image file vs partition/LVM volume performance

Hi.

I've read quite a lot of advice to switch from file-image VM storage to partition/LVM-volume-based storage.

The claim is that partitions/LVM volumes are much faster than image files.

The downside, in my opinion, is that one no longer has the whole VM in a single, easy-to-copy-and-migrate file.

Can anyone advise on this, especially whether there is indeed any difference in new versions of Xen, and whether there are any I/O benchmarks to support it?

Thanks!

  • Giving a virtual machine block-level access to its state, as opposed to file-level access, will always be faster, because a layer of abstraction is removed.

    I would recommend the LVM approach. Don't forget, you can always back up the LVM volume just like a file. There isn't much difference between the two. LVM is also quite flexible in terms of relocating the data.

    Just because the abstract notion of a file doesn't exist anymore doesn't mean it is bad. The performance gains may be considerable, and with a little bit of broad thinking you can plumb your infrastructure just like it was a file.

    I often make a partition for QEMU virtual machines. Then I can use dd to save and restore it. One filesystem (the virtual machine's) running directly on the block level is better than a file in a filesystem with another filesystem on top.

    Good luck

    : Thanks for the explanation :)
    From Aiden Bell
  • There are a few (I came across maybe two of them) benchmarks of file images vs. LVM partitions on the net (it's not that hard to Google them). Although somewhat dated, they suggest that LVM is usually faster (if by a tiny margin). That was enough for me, so I went with the LVM scheme. As far as copying goes, you can still mount the LVM logical volume, tar/gzip it, and transfer it to another location. It's not that much harder. And LVM makes it much easier to expand your server storage.

    From ryidle
  • I'll just add to all the answers above by reminding you that LVM has a fairly easy-to-use snapshot mechanism. This makes it pretty easy to back up or clone running VMs by simply making a snapshot, cloning or backing the VM up, and removing the snapshot. All without downtime.
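
    A minimal sketch of that workflow, assuming the VM's disk lives on a logical volume /dev/vg0/vm1 (all names are placeholders):

    # freeze a point-in-time view of the running VM's volume
    lvcreate --snapshot --size 1G --name vm1-snap /dev/vg0/vm1
    # copy the frozen snapshot off to backup storage
    dd if=/dev/vg0/vm1-snap of=/backup/vm1.img bs=1M
    # drop the snapshot once the copy is done
    lvremove -f /dev/vg0/vm1-snap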

    Aiden Bell : +1 for snapshots. I think that functionality gets missed a lot!
    From katriel

How do I run a batch file async using PSExec?

I have a batch file I run that, among other things, resets the NICs in the machine. I have some watchdog software running on another machine that monitors the first one. I'd like to run this batch file using PSExec when it detects certain types of failures. The problem I'm having is that since the batch file resets the NICs, it kills the connection PSExec has (I'm OK with that). The real issue is that when PSExec dies, the batch file stops running (leaving the NICs disabled).

I've tried using the -i option with PSExec with no luck. Any ideas on how to basically just fire off the batch file and NOT have it stop when PSExec is disconnected?

  • And, as usual, I figured things out 10 minutes after asking the question. It turns out I had the parameters in the wrong order. Here's what worked:

    psexec \\MyServer -i -d C:\Misc\ResetNICs.bat
    
    Nathan Hartley : Ha! I should have looked a little harder at what you were doing. It would have saved me 15 minutes worth of fixing something that wasn't broken. =)
  • Use the START command to run things asynchronously. The trick here though, is that START can not be called directly through PSExec. Calling it through the command interpreter will get around this limitation. Like so...

    psexec \\RemoteMachine cmd.exe /c start c:\test.bat
    

    Or if you want more power, you could call psexec on the remote machine with the -d parameter, similar to the START command above.
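
    Something like this should work (a sketch; the -d switch tells PSExec not to wait for the remote process to terminate):

    psexec \\RemoteMachine -d c:\test.bat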

  • How about using SOON.exe from the Windows Resource Kit to schedule the batch file to run a few seconds later?

    From ggonsalv

Jabber Management on Linux

Hello!

Does anyone know of a GUI for the configuration and management of a Jabber(2) server on Linux? Shows logged-in users, allows you to ban/remove users, general configuration and monitoring?

Looking for recommendations from people who have used it!

Thanks in advance!

  • You can check out Webmin. They have a Jabber module. I have never used it, but it is there. Good luck, hope that helps.

    Aiden Bell : Thanks RascalKing, will check it out. Not sure if webmin is overkill tho :)
    From RascalKing
  • The Openfire server is a Jabber server that has a pretty slick GUI built in:

    http://www.igniterealtime.org/projects/openfire/index.jsp

    I used this for over a year at a previous position and never had any interoperability issues with the common desktop clients or with iChat.

    It's GPL licensed, though they have some other products (also Jabber based I believe) aimed at businesses.

    From James F

Media wiki on windows server 2k3 backup/restore and upgrade

I just inherited our company's MediaWiki and:

1) What files need to be backed up to be able to restore it to a different machine?

2) I see there is an update to 1.15.1. I looked over the site, but I don't see any instructions for upgrading a Windows-based install.

Thanks if you can point me in the right direction.

  • Between these three pages you should have enough information to build out the backup you need:

    http://www.mwusers.com/forums/showthread.php?p=13894

    http://www.mediawiki.org/wiki/Manual:Backing_up_a_wiki

    http://www.mediawiki.org/wiki/User:Flominator/Backup_MW (for the windows restore script)

    From Keith
  • MediaWiki stores its information in two locations:

    1. The "mediawiki" folder of the website. This folder holds all uploads and "engine files" for the wiki.
    2. The database. MediaWiki uses its database to store all articles, histories, user information, and so on.

    Knowing this, you can move or backup a MediaWiki installation by copying over the "mediawiki" folder and dumping the relevant database. I assume you are using MySQL for the database (that's the default for MediaWiki anyway). If you aren't familiar with backing up MySQL databases, look up the mysqlhotcopy and/or mysqldump commands. Those both create a MySQL "batch file" holding instructions to recreate the data you dumped.
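
    As a rough sketch, assuming the database is named wikidb and you can connect as root (names and credentials are placeholders), the dump would look like:

    mysqldump -u root -p wikidb > wikidb-backup.sql

    And the restore on the new machine:

    mysql -u root -p wikidb < wikidb-backup.sql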

    NOTE: In Linux, MediaWiki also keeps some configurations in a separate /etc/mediawiki folder. I'm not sure if MediaWiki separates its configuration under Windows as well. It's more common for Windows programs and utilities to put everything in one folder, but you might want to check on that.

    From DWilliams

Ad server software for high traffic site

The company I work for runs a web site that gets a little over fifty million hits a month. We are looking at replacing our existing custom-written ad server with a prepackaged solution. It fundamentally must be able to scale up to this level of traffic, which eliminates a substantial number of solutions.

Our servers run Linux and so any ad server would obviously need to run in Linux.

Anyone have any software and (less important) hardware recommendations for ad serving that can easily handle 50,000,000 hits per month, and that can scale up modestly into the low nine figures of hits/month? Happy to hear about Windows solutions, though we definitely will not be running those.

  • Does it need to be a standalone solution? If not, I've had great experiences with Google's ad technology; I believe the "premium" solutions allow you to have custom ads, not just AdSense text.

    ChrisInEdmonton : Doesn't need to be standalone, but keep in mind that our traffic is fairly significant, even when looking on a per-webserver basis. Google Ad Manager is certainly worth a look. Do you have any reason to believe this could handle the traffic?
    Nalandial : Google builds everything for scalability. I'd think they would be able to handle it better than the rest.
    From Nalandial
  • Give OpenX a look. They have a free hosted solution for up to 100 million impressions a month, or you can host your own ad server. I used their phpAdsNew back in '05 and it was a very powerful and great ad tool. They changed their name at some point...

    ChrisInEdmonton : I hear bad things about OpenX. Basically, it simply doesn't scale up to close to the level we need. See for example, http://www.techyouruniverse.com/software/good-bye-openx-hello-google-ad-manager
    From xeon
  • We were running OpenX for a while, eventually got 0-day'd and ze russians installed a php rootkit. I suppose it can happen to any software, just thought I'd mention that you should plan to run it in isolation.


MySQL not resolving IP to hostname to check for privileges

I'm testing our backups and I'm running into problems with the MySQL accounts. I can't log in from one (restored) server to another (restored) MySQL server. The logs show me that it's denying the user 'apache_auth'@192.168.0.120, whereas in the privileges table the user is in as 'apache_auth'@myhost.internal.example.com. However, if I ping myhost.internal.example.com from the MySQL server, I can see that it's getting the IP address of 192.168.0.120. How come it's not doing the reverse?

  • Check your MySQL config file (like /etc/my.cnf) and see if your db server has skip-name-resolve enabled. More info: http://dev.mysql.com/doc/refman/5.0/en/dns.html

    Also, tail the error log (specified and enabled by log-error) or the warning log (log-warnings). More info: http://dev.mysql.com/doc/refman/5.0/en/server-options.html. I don't remember which one of them would have logs on denied access.
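
    For reference, the relevant pieces of a my.cnf might look something like this (illustrative values):

    [mysqld]
    # if present, grants are matched by IP only; hostname-based grants
    # like 'apache_auth'@myhost.internal.example.com will never match
    skip-name-resolve
    log-error    = /var/log/mysqld.err
    log-warnings = 2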

  • This was my own fault. We're using an internal 192.168.0.x network with our own nameserver to resolve 'db1' to 192.168.0.x. However, I hadn't set up the reverse ARPA / PTR entries, which obviously means it's not resolving IP addresses back to hostnames.
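
    For anyone hitting the same thing with a BIND-style nameserver, the missing reverse entry would look something like this sketch (zone layout is illustrative):

    ; in the zone file for 0.168.192.in-addr.arpa
    120    IN    PTR    myhost.internal.example.com.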

How can an attacker gain root next time a compromised account does?

I was reading Ubuntu's documentation about root/sudo when I came across the following:

  • Isn't sudo less secure than su?

    The basic security model is the same, and therefore these two systems share their primary weaknesses. Any user who uses su or sudo must be considered to be a privileged user. If that user's account is compromised by an attacker, the attacker can also gain root privileges the next time the user does so.

How can an attacker gain root privileges the next time the user does? Assuming sudo is disabled.

  • I think the case you are asking about has more to do with privilege escalation via su. Since it may be that only the user's password is compromised, the attacker cannot escalate until he also has root's password. If the attacker installs a keylogger program or replaces su or something like that, he/she will be able to obtain the root password the next time the privileged user types it.

  • sudo -s executes the user's .bashrc. If you have access to that user's account, you can add lines to this .bashrc that will be run as root.

    # ~/.bashrc
    # runs as root under "sudo -s": make a setuid-root copy of bash for later
    cp /bin/bash /bin/something_else
    chmod 4755 /bin/something_else
    

    I'd add something like that in, create a setuid copy of bash, so that I could run it later.


    Edit: the question now seems to ask about cases where sudo isn't used.

    The first trick that comes to mind, if I wanted to gain root privileges from a user using su, would be to modify their PATH.

    $ echo $PATH
    /usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
    $ export PATH=/tmp:$PATH
    $ su
    # echo $PATH
    /tmp:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
    

    That's running fedora 11 with bash 4. Configs for su, shell environment stuff are pretty much default.

    As you can see, I was able to change the path as the regular user, and this path wasn't reset by su (note su - would have reset it). Change their path in their shell rc, then put my own script into the new directory at the top of the path. Make a few copies (or symlinks) of it with names like ls, cp, mv, things that get run often.

    #!/bin/bash
    # make a setuid-root copy of the shell for later
    cp /bin/bash /bin/something_else
    chmod 4755 /bin/something_else
    # cause more trouble
    ...
    # now reset PATH and run the real command so the user doesn't notice
    PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
    exec "$(basename "$0")" "$@"
    

    Anyway, these are just examples, there's undoubtedly other similar scenarios. I think the point is that accounts which can su or sudo are something to be careful with.

  • If a user's password is compromised and that user has sudo privileges, an attacker could run sudo su, and become root.

    Swoogan : Actually, I'm referring to the situation where sudo is not enabled.
    thepocketwade : sorry, I missed that part of the question.
  • It's very simple actually. sudo is typically set to cache its authentication so you don't have to enter your password every time you use sudo on a command. I believe the default cache time is something like 5 minutes.

    Thus if the attacker has access to a user account with sudo access, even without knowing their password, they can just wait for the next time the user performs a sudo. The attacker can then run sudo right afterwards and get a root shell.

    I presume this is the scenario they are referring to when they mention "the next time the user does".
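
    As an illustration of that point, an attacker with a shell as that user could simply poll until cached credentials appear (a sketch; sudo's -n flag makes it fail instead of prompting, and whether the timestamp is shared between sessions depends on the tty_tickets setting):

    # succeeds silently only while the user's sudo timestamp is still valid
    while ! sudo -n true 2>/dev/null; do sleep 10; done
    sudo -n -s   # root shell, no password asked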

    Swoogan : Actually, I'm referring to the situation where sudo is not enabled.
    Jeff Atwood : note that the post says ASSUMING SUDO IS DISABLED!
    Kamil Kisiel : The poster didn't add that part to the question until after I already answered it.
  • Although sudo is commonly used together with the su command, thinking that sudo su is the only way to use it is a mistake.

    sudo has many options for setting what may be executed, by whom (either a user or a group of users), and on which host. A fine-grained sudoers file (or LDAP entry, if sudo-ldap is involved), together with the clever mind of a sysadmin, can end up in rules which do not compromise system security even if the user account has been compromised.

    Let's see a real-world example:

    $ sudo -l
    User exampleuser may run the following commands on this host:
        (root) /opt/xmldns/gen.sh
        (root) /usr/bin/make -C /root/admin
        (root) /usr/sbin/xm list, /usr/sbin/xm dmesg
        (root) /usr/sbin/zorpctl stop, /usr/sbin/zorpctl start, /usr/sbin/zorpctl status
        (root) /etc/init.d/apache status, /etc/init.d/apache stop, /etc/init.d/apache start
        (root) /usr/local/bin/protodump.sh httpreq
        (root) /usr/sbin/xm console
    $ 
    

    If one does not let the user sudo-exec su/bash or another shell, either directly (sudo su) or indirectly (letting an editor spawn as root, which could then be used to spawn a root shell), sudo is a friend of the system administrator and of users too.

    Returning to the question in the topic: if sudo is disabled and su is the only way of becoming root on a system, one would plant a fake su command (for example in ~/.../fakesu) and an alias like alias su='~/.../fakesu' in the rc file of the user's login shell.

    In this case a simple su command (hands up, who invokes it as /bin/su?) would end up calling the fakesu command, which may capture the password.

    Babu : +1 for the suggestion of only enabling commands that are needed
    From asdmin
  • The principle is simple. Sudo only requires the user's password to perform activities as root. If someone was able to break into that user's account, he probably knows the password (or can easily figure it out).

    With su, the case is a bit different, because it requires root password. However, an attacker could change the user's PATH to point to its own version of su (inside tmp or ./bin) that will save the root password somewhere.

    Now you've added that sudo is disabled. The link you provided doesn't talk about that. It mentions the case where sudo or su are configured for that user and an attacker uses that as leverage to get root.

    Swoogan : The link I provided says, "Any user who uses su or sudo." I specifically want to know about the case of su, since I already know how sudo can be compromised; it is blindingly obvious.
    From sucuri

Printer issues using Citrix?

We have remote users on Citrix using CRM 4.0 for Outlook 2003. The CRM 4.0 application is customized and connects to a SharePoint document library. When a user tries to print from the document library within CRM 4.0, the pages come out black. What could be causing this issue?

  • Are you using the Citrix UPD (Universal Print Driver?)

    From Dan
  • I've seen this with HP drivers; see if you can swap between the PCL5 and PCL6 drivers. Also, does the document happen to be landscape?

Word Document Turns to Read-Only

I am running into an issue with a user whose Word document is somehow turning itself into Read-Only. The user is using Word 2003 and is accessing a document that is in a Server 2008 share. The document itself starts out as a normal, editable document (user has Full Control permissions), and the user is able to save and do the 'normal' things you would do to a document. However, after a couple of saves, the document turns to Read-Only (according to the title bar) even though the Read-Only attribute is not checked on the document's properties.

Here is some additional information about the situation:

*User has approximately 5-8 Word documents open at a time

*User saves the document frequently (sometimes at a frequency of once per minute)

*Once the document is closed it will open as a normal document if reopened

*When the document does turn to Read-Only the user will do a "Save As" on the document and save it as FILENAME # where # is some increment of how many times this has happened (some documents are up to their 30th iteration)

I understand that there is probably some room for user education here and that they could just be copying the RO document to a new one, closing and reopening the RO doc, then copying all the information back. However, I would like to get to the root cause of the problem and try to stop it from happening in the first place.


UPDATE: Apparently the reinstall did not fix the issue. I researched the issue a bit more and found that disabling the background save may take care of it, but I haven't had a chance to try it yet. Does anyone else have any other ideas?

  • In the interest of helping you resolve your problem here's a resolution I found at WordTips:

    The only way she was able to get around the problem was to turn off the automatic backup file feature in Word (Tools | Options | Save tab, clear Always Create Backup Copy) while working in that document.

    Have you considered implementing a real-time office document sharing plugin like DocVerse? Note: As this product is currently in beta it only supports PowerPoint but will support all Office docs in the future. I used it as an example for implementing a more integrated way of sharing docs across a network.

    Also, this Read-Only hassle could be related to a network permissions issue as opposed to a document permissions issue since it involves files on the share. I'm assuming the user doesn't experience this issue when they save and edit files locally.

    Psycho Bob : The documents that the user accesses and saves in this way are not stored locally, so I couldn't speak to that user's ability to work with local documents in the same manner.
  • We started experiencing the same issue with Office 2007 (both Word and Excel docs) a while back. It happened in a number of scenarios, as well. Editing a doc stored locally (both on XP and a Win7 build), a doc stored in SharePoint 2007, a doc on a file share, etc...

    Luckily, when we started having this issue, Office 2007 SP2 was only about a week or two away from release. Once it came out, I installed SP2, and since then the issue has not come back (knock on wood).

    I know you are running Office 2003, but I would at least make sure it's fully patched and see if that helps. Also, check with Microsoft to see if there is a hotfix available that hasn't been released as a standalone patch or rolled into a service pack yet.

    From dustin
  • It appears that going into the normal.dot and disabling background saving does the trick.

    From Psycho Bob
  • This problem is not peculiar to Word 2003. I get it on Word 2007 as well, and it occurs on network shares and local files. I found this thread looking for any information about it. I will attempt turning off the "Save Auto-recover Information every.." option, and see if that works. It is ironic that one must turn off the ability to automatically save a document in order to be able to save it on purpose.

    From David

Data Guard Status: ORA-16764 after switch to log transport services

Howdy,

I originally set up our physical standby database with redo transport services. Now, I'm switching to log transport services to reduce the time in which the standby lags production. I set up a new log_archive_dest_n to use LGWR ASYNC, enabled it, and deferred the old log_archive_dest_n. Everything seems to work: Enterprise Manager Data Guard reports log transport services are being used, and the apply lag time is now around 20-30 seconds. I'm happy with that. What bugs me is that the primary database insists on reporting "ORA-16764: redo transport service to a standby database is offline". I realize it's offline; I took it offline because if it's online, Oracle insists on using redo transport instead of log transport. If I remove the dest_n parameter entirely, I get a Data Guard status of ORA-16777.

Is there a way to get rid of the error messages?

  • It appears the issue is resolved. Oracle support recommended deleting and re-adding the standby from the data guard screen in Enterprise Manager. I did that, and at first the issue appeared resolved. Soon, however, a new error, ORA-16778 began to appear. After a bit of tail-chasing, I realized that the Data Guard process had re-added the log transport service to the initialization parameters, creating a duplicate of the one I had already added. Removing the entry that I had created and leaving the DG added one has the problem in remission. Thanks for looking.

    From DCookie

Best practices or experience with company wide Username policies and resolving duplicates

I am a programmer with an application that needs to be integrated into the new company wide Active Directory login scheme. This means changing all the usernames in our system to use the new scheme. The new scheme is "first initial, last name", so Joe Smith would have a username of jsmith. If John Smith now gets hired, he'll get jsmith2. BUT as soon as Joe leaves the company, his AD account is deleted, and jsmith is available again. So if Jill Smith now is hired, she would get jsmith. From an applications standpoint this causes problems in my view, because I could now have records relating to Joe and records relating to Jill that are indistinguishable, because they were both created by "jsmith".

I am therefore left to wonder if there is a standard or best practice that addresses this issue of reusing usernames in an organization wide directory, especially in larger companies. When bringing up my concerns at a meeting I was told that "there's no way [big company name] still has a record of every user that's left the company", and that struck me as crazy. So, is there a generally accepted solution to handling usernames? Or does every company make it up as they go?

  • Windows copes with this by using a GUID to identify every account. The username is just decoration. You'll find that the old jsmith and new jsmith have different GUIDs even though the usernames are the same.

    Can you associate a GUID with each account in your app? If I think about it I can probably tell you how to get at the GUID for a user. It will be an attribute of the user in active directory.

    JR

    Les : TRY to make the GUID the same as the user name. In your scenario, when Jill hires in, she gets the user name jsmith and the GUID jsmith3 (since the others are still in use). Just an idea - there may be implementation issues that could be problematic. For example, should a lookup of GUID jsmith return 'no such user name'?
    Evan Anderson : The GUID is assigned by Active Directory and guaranteed to be unique in the forest by AD. If you're doing a bind or search against AD using the GUID in the search filter you'll receive no results back assuming that no object has that GUID assigned.
    Peter : ahhh, I might have misled everyone by using the term "my app". It is not so much "my app" as the "app I am responsible for enhancing/supporting". It is closed-source software bought from a vendor. There is no room on the user record for any identifier besides a username, which everything else in the DB is tied to, including all the audit tables, security, etc. So for this to work, I'd need to force the user to log in using the GUID, but that breaks the desired policy as well.
  • There are several naming attributes in Active Directory:

    • sAMAccountName: This is the up to 20 character name that must be unique within the Domain but not within the forest.
    • userPrincipalName: This is usually in the form of sAMAccountName@domain.name and I believe it needs to be unique within the forest; since the first part is unique within the domain, appending @domain.name makes that happen naturally.
    • displayName: What you see in ADUC MMC when you look at users.
    • DN: The actual LDAP DN of the user in the AD tree. This is how you would identify the user from an LDAP perspective, usually.

    DN is a bad one to key off of, since it will change if the user is moved or renamed. The same is possible for userPrincipalName/sAMAccountName (if the user gets renamed).

    As another poster suggested, the really truly unique attribute of users is probably GUID, and is not nice and human readable, alas.

    From geoffc
  • Yes, we keep every user ever hired. I usually recommend first initial, middle initial, last name to minimize the number of JQPublic1 accounts, but it happens. However, from an AD perspective, users are just numbers. You can see the number for any account with this script:

    strComputer = "."
    Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")
    Set objAccount = objWMIService.Get ("Win32_UserAccount.Name='myusername',Domain='mydomain'")
    Wscript.Echo objAccount.SID
    

    As a free bonus, here's how to turn a SID into a username:

    strComputer = "."
    Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")
    Set objAccount = objWMIService.Get ("Win32_SID.SID='S-1-5-21-1454471165-1004336348-1606980848-5555'")
    Wscript.Echo objAccount.AccountName
    Wscript.Echo objAccount.ReferencedDomainName
    
    Matt Simmons : I'm very much considering this, as well. I've got a specific OU that I put my "active" users in that permits things like VPN login, in addition to their belonging in the custom "Employees" security group I created. I've considered an "inactive" OU for users' accounts that no longer have any permission to access the system.
    From Jim B
  • As others have mentioned, using the GUID attribute of the user's Active Directory account is a great idea. If you want human-readability, though, you should have a look at the docs for the IADsNameTranslate interface. You can get a lot of mileage out of it for translating between the various possible names of an AD account (GUID, SID, samAccountName, displayName, DN, etc).

    Example:

    Option Explicit
    
    ' Constants for the iADsNameTranslate object. (from http://msdn.microsoft.com/en-us/library/aa772267(VS.85).aspx)
    Const ADS_NAME_TYPE_NT4 = 3
    Const ADS_NAME_TYPE_GUID = 7
    
    Const ADS_NAME_INITTYPE_GC = 3
    
    Dim objNameTranslate 
    Dim strUserGUID
    
    ' Create a nametranslate object and init to talk to a global catalog server
    Set objNameTranslate = CreateObject("NameTranslate")
    objNameTranslate.Init ADS_NAME_INITTYPE_GC, ""
    
    ' We're looking for an "NT 4" account name type-- aka a samAccountName
    objNameTranslate.Set ADS_NAME_TYPE_NT4, "DOMAIN\username"
    
    ' Translate into the user's GUID
    strUserGUID = objNameTranslate.Get(ADS_NAME_TYPE_GUID)
    
    WScript.Echo strUserGUID
    

    This isn't just for user accounts. Every object in AD has a GUID, so if you need to "remember" a DN for, say, an LDAP search base (or a group, or anything), you can use the GUID, such that if the object gets moved around in AD (think about some admin going off and re-organizing OUs, or renaming groups) your "pointer" to it won't break (because the GUID never changes).

  • GUID is definitely the way to go. If you really need to present the person's name in a human-friendly format just do an AD lookup in code.

    From mh
  • Others have already posted script snippets, and you'll find plenty more, for a variety of languages, at the usual coding sites. Try searching those for things like "user to guid", "user to id", etc.

  • IMO, good practice is to never delete accounts, just disable them. That way they can't be re-used.

  • Peter, my own experience with this is - don't expect the username to be unique, unless you have a way to guarantee username uniqueness. As you say, it never really works out.

    The app, whatever it is, needs to use other relevant data to determine what makes the user unique within the context of whatever the app is doing. If you're doing an app to generate mail flyers, for instance, you probably don't want to send more than one flyer per unique postal address.

    Peter : unfortunately the app I support/enhance is one of those 'off the shelf' apps that uses username as its primary identifier in all the audit tables, so it has to be unique within the app.
    From quux