Wednesday, January 12, 2011

how to migrate mail from argosoft to postfix/dovecot

Any ideas on what a sane strategy might be for migrating mail away from the Argosoft mail server for Windows? Product site: http://www.argosoft.com/RootPages/MailServer/Default.aspx (note: not the .NET version). We want to migrate to a more standard setup using Postfix/Dovecot on Ubuntu.

Would something like imapsync do the trick? Anyone successfully migrate their mail away from the Argosoft Mail Server product?

  • Using a tool like that will get the mail moved for you, though you should do some testing to see how fast all of your mail can be migrated. What sort of downtime are your users willing to accept while everything is moved over?

    We bring in new e-mail clients all the time and move their existing mail into our system using a similar, proprietary, tool designed to work with our e-mail platform.

    One thing to watch for is if you have a web-based e-mail system with any other information stored in it (address book, calendars, etc.) then you will want to consider those as well for your migration. There's nothing like an angry boss who had all of his contacts in webmail and suddenly can't get to them on the new system.

    faultyserver : Good point, the address book is going to be an interesting item to migrate over (more than likely a nice, difficult, proprietary mess). Thankfully we don't have to worry about migrating calendars, planners, or other misc. 'features'; the Argosoft webmail client is pretty spartan in that regard.
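
    For reference, a typical per-mailbox imapsync invocation looks something like the sketch below; the host names, user, and passwords are placeholders, and in practice you would loop over your full account list and time a few large mailboxes first.

        imapsync --host1 argosoft.example.com --user1 jdoe --password1 'oldsecret' \
                 --host2 dovecot.example.com  --user2 jdoe --password2 'newsecret'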

Unable to use stty in Mac's Zsh

The Unix Power Tools book recommends using the following command if you do not want to get any notifications from running processes:

stty -topstop

or

stty -topstop

Both commands give me

stty: illegal option -- topstop
usage: stty [-a|-e|-g] [-f file] [options]

How can you use the command in OS/X's Zsh?

  • This does not seem to be a zsh issue. This is an issue with the specific implementation of the command stty on your system.

    To investigate a new command, remember:

     $ man [command]

    There are also manual pages online, so if your particular system does not have the man pages installed, you can still get the needed info.

    Many modern utilities have built-in help info. Try

    $ [command] -h 

    or

    $ [command] --help

    to figure out what options the [command] you're interested in has on your system.

    I also suspect that you have a typo. Do you really mean "topstop"? That does not seem to be a valid option.

    From EricJLN
  • stty doesn't seem to have a -topstop option; is it supposed to be -tostop? Use

    man stty
    

    for details on the stty command and its options.
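
    For what it's worth, a quick way to check and toggle the setting on OS X (assuming the book meant tostop) is:

        $ stty -a | grep tostop    # "-tostop" in the output means it is currently off
        $ stty tostop              # stop background jobs when they try to write to the terminal
        $ stty -tostop             # let background jobs write to the terminal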

  • Before running the command, you must tell zsh to unfreeze your terminal settings:

    ttyctl -u
    stty -tostop
    
    Masi : I get the same error message. -- I need to try your code in Ubuntu. It will likely work.
    From redondos

Using a laptop as an external monitor and keyboard for a server?

Is there a way to use an old laptop as an external keyboard and monitor for my rack servers?

  • There are a couple of pieces of hardware around like this:

    http://www.iogear.com/product/GCS661U/

    But I haven't seen one wholly accepted solution. I've thought about this many, many times while cursing a crash cart that was locked in someone else's cage at my colo.

    Matt : This appears to be a Windows (2000, XP, Vista) only solution.
  • You could use Remote Desktop with the /console switch if they're both on the same network.

  • I wonder if you could leverage the mirror support in Maxivista for this?

    http://www.maxivista.com/

    From tomfanning
  • There is a KVM from Epiphan Systems that connects your laptop to the monitor/keyboard/mouse of another system using a USB based VGA framebuffer. At $399 the price may be a bit steep though.

    http://www.epiphan.com/products/frame-grabbers/kvm2usb/

    From Matt
  • If it's a Linux server, you can connect to X remotely, or tunnel it through SSH if security is a concern (see the sketch after the comments below).

    Javier : of course, if it's a Linux server, it shouldn't use X
    Slartibartfast : Well, there is no reason for it to have X started all the time, but you can SSH to it, start X to do whatever you want, and then turn X off. (I know it's hard to imagine, but some people may prefer a GUI over the console; shocking, I know :) )
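
    A minimal sketch of the X-over-SSH approach mentioned above (the host name is a placeholder; the laptop needs its own X server, e.g. XQuartz on a Mac):

        $ ssh -X admin@server.example.com    # or -Y for trusted X11 forwarding
        $ xterm &                            # any X client started here displays on the laptop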
  • From a hardware perspective, there is no straightforward way to do that. The video ports on most laptops are outputs, not inputs, and you can't reverse that. The keyboard/mouse ports are inputs, not outputs. Most USB controllers inside your server and laptop will fight over devices connected to the same bus, so attaching them via a hub is also not practical.

    1. Use some remote desktop/vnc/X-terminal (but of course, why do you even need to be in the server room?)
    2. If your system has a serial port and a boot management processor (like iLO) you can connect directly to the BSP, which will then generally let you get a terminal session on the server. This is true for both *nix-ish and Windows systems. HP Integrity servers tend to have this capability.
    Matt : See my answer below for a method in which you can connect directly to the server using a special type of KVM.
  • Use servers with "lights out" modules. This way the network is all you need and your notebook will be a nice terminal.

    From slovon
  • You could also use a software KVM with an adapter. They're few and far between.

    One such product is from Epiphan (never heard of them before today's googling): Product Page

    From davenpcj

Looking for mass email delivery service.

I'm looking for a solution to deliver newsletters through a reliable service provider. Currently, I'm using Lyris List Manager software on my dedicated server to send out newsletters to opted-in recipients.

We don't have the knowledge to troubleshoot issues coming down the pipe, so we're looking for a hosted solution to manage our mail delivery. Does anyone know of or have any experience with mass mail delivery services? Lyris offers Lyris HQ for a hosted solution, but I'd like to know what else is out there.

  • I think subscribing to a service from providers who do this for a living would be the best approach. Hosting it yourself on a dedicated server may sometimes trigger the ISP to warn you that you are violating their spam policy, or something like that. From an ISP's point of view, mass-delivering newsletters really can look like a type of spamming.

    From kentchen
  • Campaign Monitor is pretty awesome, in my experience.

    ExactTarget is another commonly used service.

    From ceejayoz
  • My company uses Rackspace's Mailtrust. I believe the two companies recently merged, as their website now claims "Mailtrust is now Rackspace" or something similar. It works well and includes Outlook Web Access for us. I do not know if they have any other interface. I'm not the biggest fan of OWA, but it works pretty well.

  • We use Blue Sky Factory. They bought our old Mass Mailer, so we got our old rates. I'm not too sure what Blue Sky's rates are, but the interface for reporting and segmentation is GREAT.

    From Jacob
  • Mail Chimp.

    Here's a review of it by Paul Stamatiou from when he used it for Skribit: http://paulstamatiou.com/2009/03/25/review-mailchimp-email-marketing

    mhud : Another vote for MailChimp. It's very easy to set up and generally very slick.
    From p858snake
  • We use Emma. What's nice about Emma is they have a whole interface for creating and customizing newsletters on the web, and they're pretty cheap if you want them to create something new. They also have all the standard features: audience tracking, multiple lists, etc. We've used Constant Contact and Bronto in the past, and looked into a lot of others like Vertical Response and MailChimp and found Emma to be the easiest to use.

    From Mark Trapp
  • If you are specifically turning an RSS feed into an email newsletter, check out http://FeedmailPro.com

    It's a lot less expensive than solutions mentioned above.

Exchange 2003 RAID Configuration for 15 disk Array

I have a 7200 rpm SATA disk array attached via SCSI with 15 disks. I am trying to figure out what would be the best way to configure RAID for my back-end Exchange 2003. I have 4 mail stores (not including the public folders), but I am thinking 1 big raid 10 array for the stores and some sort of other raid for the logs.

Maybe 5 drives in RAID 10 with one spare, and RAID 10 over 4 disks for the logs?

With the size of the drives, space isn't really an issue; I am trying to maximize speed while still having redundancy.

Anyone have any recommendations? Or perhaps some counters I should get to make this decision?

Current Setup (I don't currently know what RAID level is in use on these physical arrays): A physical array with the logical drives: C: E: F:

  • Contains the Logs and Transactions Logs (Transaction Logs are about 1 GB a day)
  • 22 GB Mailstore (11 Mailboxes)
  • 54 GB Mailstore (108 Mailboxes)
  • 1 GB Public Folders

Another Physical Array with the logical drive G:

  • 66 GB Mailstore (90 Mailboxes)
  • 9 GB Mailstore (11 Mailboxes)
  • For best performance it's a good idea to keep logs on different LUNs than anything else; say, RAID 1 sets for those. Yes, you'll be 'wasting' eight drives just for logs, but it'll be fast. We used one big RAID 10 array for our mail-store volumes.

  • Since you have the ability to use multiple storage groups, you may want to use multiple smaller RAID 10 stripes/mirrors so you can segregate your mailboxes and balance the load more evenly across all those spindles. Your particular RAID controller's behaviour dealing with multiple arrays is going to influence this, too.

    Ultimately, you should consider configuring the array and running Exchange LoadSim on it (http://www.microsoft.com/downloads/details.aspx?FamilyId=92EB2EDC-3433-47CA-A5F8-0483C7DDEA85&displaylang=en) to see how it's acting.

    As a quick and dirty benchmark, watching the disk queue length counter is going to tell you if requests are queuing up on the disks. Deeper Exchange tuning / design is discussed in a variety of whitepapers from Microsoft. I've never tried to keep it all in my head at once, but then again I haven't had to do a lot of very high performance Exchange installations (small offices, 200 or fewer mailboxes, etc-- nothing super performance intensive...)
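
    As a rough sketch, that counter can be watched from the command line with typeperf (the counter names below assume the default English counter set):

        typeperf "\PhysicalDisk(_Total)\Current Disk Queue Length" "\PhysicalDisk(_Total)\Avg. Disk Queue Length" -si 5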

    Edit

    RAID 10 sure looks like overkill for the transaction logs based on the 1GB / day growth. I would think that RAID 1 would be more than sufficient for them.

    You'll need to know about the usage patterns of the various mailbox users to figure out how many IOPS each mailbox store is consuming. I'd focus on researching the characteristics of the current server's utilization and benchmarking the current server to get a handle on how to lay out the mailbox stores in multiple storage groups on the new server.

    I could easily see you getting two (2) RAID 10 arrays and one or more RAID 1 arrays to split the load (depending on how hard those transaction log disks are getting hit-- you really should put the transaction logs for a storage group on a dedicated spindle). At that point, it would be a matter of benchmarking to see if you get more performance out of a 6 disk RAID 10 versus a 4 disk (depending on your RAID controller, you should). You can budget your IOPS, then, based on the loads of the various mailbox stores.

  • How many users? How much space per mailbox? How big are the mailstores now?

    What are the current Exchange Servers running on? You will want the same IOPS or better when you migrate to the array.

    From Rob Bergin
  • If the array controller has the feature, you may want to allow for a hot spare.

    From Chris

automated installs vs. drive imaging

What are the pros and cons of deploying via automated installs vs. drive imaging? For Windows I know there are issues around SID generation when cloning drives. Are there any similar issues for deploying Linux via an image?

  • Drive imaging is faster, but your hardware has to be very similar for it to work. It's also harder to customize the image; you'd need a base image for a web server, an email server, etc. With automated installs you can have all machines install from the same network location but use different scripts depending on what kind of server you want, rather than needing to create and store multiple images.

    From Jared
  • It depends on what applications you will install and how long you'll keep the image un-updated.

    There are quite a lot of updates coming out every month, so even after restoring a box from an image you'd need to upgrade it.

    Regarding the SID: as far as I know it's enough if you generate unique private keys [for SSH, HTTPS, TLS, e.g. for SMTP/POP3 servers, etc.]; this should work fine. Generating a unique host name would also be nice. This might differ depending on the distribution; I mostly use Debian and didn't have any problems cloning virtual machines with that OS.

    Kara Marfia : If you're joining an Active Directory domain, duplicate SIDs will create problems. Sysprep and NewSID - http://technet.microsoft.com/en-us/sysinternals/bb897418.aspx - are easy enough to use, though.
    From pQd
  • Check out this question which is very similar:

    http://stackoverflow.com/questions/398410/windows-disk-cloning-vs-automatic-installation-of-servers

    (I asked it on stackoverflow.com a while ago, when serverfault.com wasn't around).

    From Tom Feiner
  • Especially if you have disparate hardware, I'd suggest automated installs. For Windows, look at Unattended.

    http://unattended.sourceforge.net/

    From LapTop006
  • I disagree with some of the answers here. If done correctly you can take an image and load it on multiple systems using different hardware. Personally I've seen images that support up to 30 different systems.

    My answer to your question is to use both methods if you are very anal about the creation of an image. Create the automated install and then sysprep the result. This will lead to repeatable, self-documenting images.

    Also, if you can write to the disk image in its saved state, you can extend what is on the image by including a script that can be run during sysprep. Or you could back up your system before running Sysprep, then extend it and run Sysprep again afterwards.

    I've done both methods with good results.

    Regarding SID problems, you should always use Sysprep for a new image (although NewSID will work), which will resolve any SID issues. However, there are other applications which write GUIDs to the registry that would need to be cleaned; off the top of my head, Altiris and WSUS are two that do this.

    romandas : +1 - Proper imaging can be done for dissimilar hardware.
    From Rob Haupt
  • Imaging is a losing proposition. A full CentOS kickstart installation should take under 10 minutes. If your install is significantly slower, that's the problem worth investigating.

    The problem with imaging is that you have to keep a "golden" copy and update it as you make changes to your build. This means that you still need a mechanism for an unattended installation, and each change requires doing such an installation, changing the image (requiring a mechanism for automated customization for your environment), and making this copy the golden copy. If you are going to make changes directly to your golden copy, then you'll quickly end up with a mess after years of patching, upgrades, etc.

    If you must image the systems, then you should image the default build of the OS, and make your postinstall work (local customization) happen separately on each new machine. This way trivial changes to the build won't require rebuilding the golden copy.

    If your hardware is not all identical, you can leverage the installer's automated detection/configuration. I've used a virtually identical Kickstart configuration between RedHat/CentOS 3, 4, and 5, and all kinds of hardware.

    The worst result of imaging I've ever seen was a system of installing Solaris systems using a golden image (and dd with multipacks). Their installer and patching is so slow that this seemed to make sense. Unfortunately, they make completely changing the hardware of an installed system nontrivial. Each hardware type had its own golden image. A trivial build change would require making a change on dozens of disks. The second-worst was a Windows group imaging machines (again reasonable due to a crippled installer) compared to a Linux group using Kickstart. The Linux group could deploy a change to, say, DNS configuration in a few minutes. (One minute to change the postinstall, then a test build, and then a manual push of the configuration to the existing machines). The Windows group had to boot each golden image, make the change, undo the cruft caused by booting the golden image, then do a test build. (They also had to purchase special tools to automate changes to system configuration on multiple machines, to change the existing machines). The Windows group also had the option of reinstalling the golden image to make their change, but as it was a manual installation of the OS and dozens of applications, it would be slightly different each time requiring weeks of testing and risking the production systems being less identical than otherwise possible.

    Note that in both cases the Windows and Solaris setups using a golden image were not handled in the best possible way, and some of the choices made by the admins involved betrayed a lack of competence. But starting with a design that was not reasonable did not help.

    Kickstart works so well that there's no reason to even consider doing otherwise (I have a lot of little complaints about it, but it would be a thousand times worse if it was done by imaging machines). If your installation program is something besides Anaconda, and its automated installs are less useful than kickstart, you should consider whether that distribution was really ever intended for enterprise use.
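
    For illustration only, a stripped-down Kickstart file in the spirit described above might look like the following (the mirror URL and password are placeholders; site-specific work goes in %post):

        # ks.cfg -- minimal CentOS example
        install
        text
        url --url http://mirror.example.com/centos/5/os/x86_64
        lang en_US.UTF-8
        keyboard us
        timezone UTC
        rootpw changeme
        authconfig --enableshadow --enablemd5
        bootloader --location=mbr
        clearpart --all --initlabel
        autopart
        reboot
        %packages
        @base
        %post
        # local customization (DNS settings, config-management agent, ...) goes here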

    From carlito
  • I can't really comment on the Linux side of things but with Windows I'd say there aren't too many pros for using an automated process over an image.

    There is a lot of guidance available from Microsoft, here.

    The proof is in the pudding. Microsoft now uses image-based deployments for Vista, Windows 2008 and Windows 7. Using the new tools and process described in the link above, you can deploy Windows to any HAL type (including XP), with full driver support and not a huge amount of effort.

    From Jachin
  • Deploying Windows via images is fully supported by Microsoft using Sysprep to "factory seal" the image before deploying it. Sysprep resets the SID and essentially prepares the image for a new machine.

    However, it is highly recommended (unless you are a small company) to have a fully scripted install as well, for a simple reason. Every time you need to update your image, you have two options:

    1) Continually modify the existing image and re-sysprep it every time. This will eventually result in problems as you keep patching, modifying and sysprepping the same image over and over.

    2) Recreate the image from scratch, which is vastly preferable. However, if you don't have a scripted build, you run the high risk of getting plenty of inconsistencies between builds.

    So, in summary:

    • Use a scripted build to create an image
    • Use the image for deployment

    An additional wrinkle in all this is that Windows Vista, 2008 and 7 all use an image-based install, so the gains of using image-based vs scripted install have disappeared anyway.
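
    For reference, the "factory seal" step described above boils down to one command run on the reference machine before you capture the image; this is the Vista/2008/7 invocation (the XP-era Sysprep takes different switches):

        REM Generalize the reference installation, then shut down so the image can be captured
        C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown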

How to switch proxy settings via script?

There are a number of users in our organization that use their laptops on multiple networks. Each network has its own proxy setting requirements for accessing the Internet and currently, the users must manually change these settings in Internet Settings whenever moving to a different network.

Is it possible to script the changes so that the user can just run the appropriate script for the network they're on?

This is primarily for Windows XP but might also be required for Vista and Windows 7.

  • Microsoft KB 819961 is a good starting point.

    The registry settings for the browser are located here.

    [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
    "MigrateProxy"=dword:00000001
    "ProxyEnable"=dword:00000001
    "ProxyHttp1.1"=dword:00000000
    "ProxyServer"="http://ProxyServername:80"
    "ProxyOverride"="<local>"
    

    It should be pretty easy to build a VBS or PowerShell script to automatically update the registry.
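
    As a sketch, a per-network batch file (a hypothetical "office.cmd"; the proxy host and port are placeholders) could simply rewrite those values:

        @echo off
        rem Point IE/WinINET at the office proxy
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyEnable /t REG_DWORD /d 1 /f
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyServer /t REG_SZ /d "proxy.office.example:8080" /f
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyOverride /t REG_SZ /d "<local>" /f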

    Jeff Yates : Ah, I didn't even think about the registry - no idea why. Thanks!
    From Zoredache
  • Look into proxy auto-config scripts. You can script changes to the Windows registry to select a different proxy server, but you'll really like proxy auto-config scripts and how they work on your client computers.

    http://en.wikipedia.org/wiki/Proxy_auto-config

    I moved to proxy auto-config files for my school district customer a couple of years ago as a result of administrators taking laptops off-site and trying to work on other networks that didn't need an HTTP proxy specified. It's worked like a charm, and is a nice cross-browser and cross-platform compatible solution.

    Jeff Yates : This sounds ideal - I'll look into it.
    : I will drop in and second this solution - it really _is_ as ideal as it gets: no more registry hacking, no more manual changes. It just works.
    Jeff Yates : This worked great, by the way, once I got it set up.
    Evan Anderson : Boo-yeah! Proxy autoconfiguration is "the bomb".
  • There are a couple of different ways to do this, but I would personally look at applying a GPO that runs a logon script. The script would look at the subnet the user is in and apply the appropriate proxy setting. This website describes the place in the registry where the change would have to be made.

    http://www.computing.net/answers/networking/changing-ie-proxy-via-login-script/22498.html

    From bread555

Logging in as another user in sharepoint

Hi. I'm a site collection administrator (and physical server administrator) in SharePoint (3.0), and I'm debugging other users' rights to access some of our own features. Is it possible, in any way, to log in as another user (with his/her rights) without knowing his password? I can create my own 'dummy' user assigned to the same groups, but looking through 100+ groups to see if the user is there isn't what I want to do this evening. Thanks.

-- y

  • I don't know of any way to use an account without the password. However, you could just make a copy of the user in AD; this will retain the same group membership, and you can then set the password to whatever you like.

    All you need to do is right click on the user, click copy and then complete the details required.

    Sean Earp : That would work great, assuming the SharePoint permissions are assigned to SharePoint groups containing AD groups (containing the user and cloned user). I have seen many SharePoint installs where the users were added directly to SharePoint groups. Just something to look out for when troubleshooting permissions in SharePoint :)
    Yossarian : In fact, I 'inherited' a crappy system to administer and develop on, and there are no groups in AD, only in SharePoint, so this isn't a solution :(
    From Sam Cogan
  • Short answer - No it is not.

    Long answer - The best practice for this is to set up test user accounts in AD and SharePoint in a logical and structured way, and to add this task to the admin processes for adding a new user group. This is the only way you will be able to test properly. And of course these users should really be in your test environment, but I realise that a lot of companies are either too tight or too stupid or both to fund dev and test environments for SharePoint, so you may have to do it in live. It can be a lot of work depending on your environment, but it really is the only way to see if "Tony in Marketing" really can't access the Marketing Proposals Library or if he is just a dork.

    Alternative answer: Use remote control software like CoPilot to take over Tony's computer to see the problem first hand

    @_nige MCTS SharePoint

    Yossarian : That's not true; I worked it out.
    Kevin Davis : I work on the SharePoint product team and am responsible for permissions management UI. I'd encourage this approach.
    Yossarian : I'd rather waste my time other way than trying to force our crappy implementations to work, but one evening hacking sharepoint is cheaper than spending one month rewriting our old code.
    Yossarian : (there was no intended offense in previous posts, sorry if it looked like that)
  • You cannot log in as another user without the password (AFAIK).

    Some of the approaches you might want to try are described in this article.

    However, if you really want to "log in" as another user to check particular permissions, you might want to try this product.

  • So, the solution is the following (not clean, but working):

    1) Write your own IHttpModule containing:

    using System;
    using System.Reflection;
    using System.Web;
    using Microsoft.SharePoint;

    // HttpModule that lets a site collection admin impersonate another user via a cookie.
    public class LoginModule : IHttpModule
    {
        public void Init(HttpApplication context)
        {
            context.PreRequestHandlerExecute += new EventHandler(UglyHack);
        }

        public void Dispose() { }

        void UglyHack(object sender, EventArgs e)
        {
            HttpCookie wannabe = HttpContext.Current.Request.Cookies["_sp_admin_wanna_be_user"];
            if (wannabe != null && SPContext.Current.Web.CurrentUser.IsSiteAdmin)
            {
                SPWeb cw = SPContext.Current.Web;
                // Swap the private m_CurrentUser field (ugly reflection hack, unsupported).
                typeof(SPWeb).GetField("m_CurrentUser", BindingFlags.NonPublic | BindingFlags.Instance)
                    .SetValue(cw, cw.AllUsers[wannabe.Value]);
            }
        }
    }
    

    2) Sign it

    3) GAC it

    4) Register it in web.config (see the sketch below).

    Voila! You're the man. :) (Of course, I added logic to add the cookie setting to a menu, security checks, etc.)
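
    For step 4, the registration sketch below shows the general shape; the namespace, assembly name, and PublicKeyToken are placeholders for whatever you actually signed and GAC'd:

        <!-- web.config of the SharePoint web application -->
        <system.web>
          <httpModules>
            <add name="LoginModule"
                 type="MyCompany.SharePoint.LoginModule, MyCompany.SharePoint, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0123456789abcdef" />
          </httpModules>
        </system.web>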

    From Yossarian
  • Yossarian, could you please post your whole solution here?

Unable to have a Vim-like keybinding in Emacs

I want to rebind the command for moving between windows in Emacs to

Ctrl-t

The command in pseudo-code

(global-set-key "\C-moveBetweenWindows" 'C-t)

How can you remap the command for moving between windows in Emacs?

  • In your ~/.emacs file, include the following line:

    (global-set-key "\C-t" 'other-window)
    

    This will set Ctrl-t (C-t) to move to the next window just like the C-x o key sequence. This will replace the transpose-chars key binding, which you could set to something else if you wanted.

Mac OS X machines - VERY slow access to Windows shares

I have a handful of mac boxes accessing a share from a remote Windows Server 2003 box over a site-to-site VPN. They are connecting to the share using cifs, authenticating with AD credentials, and performance is absolutely pathetic - think waiting 5+ minutes to open/copy/move shared docs, even small ones <100Kb.

I am relatively new to this situation but it has been ongoing for quite some time before I took over. For some further background, I can access the same files from Windows machines on the same LAN as the Macs as fast as one would expect for the situation. All of these machines are on a Cisco Catalyst switch behind a Cisco PIX firewall (which provides the site-to-site VPN access). Ping responses from Mac boxes and windows boxes to file server are about the same: 6-7ms.

Has anyone experienced problems like this accessing windows shares from Macs? Is this a protocol issue? Thanks for any input.

  • My gut says you might be having an MTU issue on your VPN. Path MTU discovery is supposed to work around this, but there can be misconfigurations of networking gear that make it not work right.

    I don't know what the specific PING syntax is on OS/X, but on Windows you can send a PING from the server to one of the clients with the syntax:

    PING <destination> -l <length> -f
    

    This sends a PING packet with the specified length to the destination with the "do not fragment" bit set. You should be able to move packets with a length of 1472 between the client and the server, unless there's a connection between you with a smaller MTU.

    Have a look at this article from Microsoft for some background: http://support.microsoft.com/kb/314825

    Do you see any issues with other protocols running over the VPN, like HTTP or FTP? CIFS isn't the best performer over highly-latent links, but the times you're talking about are outside the ballpark of "normal" CIFS suckage.

    Froosh : +1 Just recently had similar issues with an MTU black hole that did not send ICMP Fragmentation Required messages for oversized packets.
  • To diagnose a possible MTU issue the ping syntax in OS X would be

    ping -D -s packetsize destination
    

    where packetsize is the number of ICMP data bytes to send, i.e. your target packet size minus the 8-byte ICMP header (the default packetsize is 56, which means 56+8 = 64-byte packets are sent).
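
    For example, to check whether a full 1500-byte frame survives the VPN (1472 data bytes + 8-byte ICMP header + 20-byte IP header = 1500; the host name is a placeholder):

        $ ping -D -s 1472 fileserver.example.com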

    From Brad

what ports are used by ftp over ssl?

Possible Duplicate:
What firewall ports do I need to open when using FTPS?

Trying to open up ports in a SonicWall firewall. The service is FTP over SSL (NOTE: NOT SFTP). What ports does this service use? I have tried the standard FTP port as well as 989 and 990.

Also, what other troubleshooting tips might one suggest? I am a netcat newbie, so any hints as to how to use that tool would be appreciated as well. Thanks.

  • Because FTP utilizes a dynamic secondary port (for data channels), many firewalls were designed to snoop FTP protocol control messages in order to determine what secondary data connections they need to allow. However, if the FTP control connection is encrypted using TLS/SSL, the firewall cannot determine the TCP port number of a data connection negotiated between the client and FTP server.

    Therefore, in many firewalled networks, an FTPS deployment will fail when an unencrypted FTP deployment will work, but this problem can be solved with the use of a limited range of ports for data and configuring the firewall to open these ports.

    via Wikipedia ... http://en.wikipedia.org/wiki/FTPS
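
    If you control the FTPS server, the usual way to apply that workaround is to pin the passive data range in its configuration and open only that range (plus 21) on the firewall. A sketch for vsftpd, assuming that is the daemon in use and using an arbitrary range:

        # /etc/vsftpd.conf (excerpt)
        ssl_enable=YES
        pasv_enable=YES
        pasv_min_port=50000
        pasv_max_port=50100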

    From Matt
  • As far as I remember, in active mode it uses the same ports, but the STARTTLS method is run first.

  • You will certainly have issues with FTP/SSL in either passive/active mode if your firewall rules are too strict.

    In active mode, you only need to open ports 20/21 inbound and keep state for outbound traffic. It will not work well with many users, but you don't need to worry about using ftp-proxy tools or anything.

    The passive mode will not work well with SSL, unless you keep every port > 1023 open :)

    The best way is to use SFTP (included with ssh). Most ftp clients support it already and you only need port 22 open.

    From sucuri
  • I was once greatly embarrassed by recommending FTP over SSL, assuming that the protocol had solved the design issues that plague FTP, since the encryption would otherwise make them unsolvable. Instead, the encryption simply makes it impossible for a firewall to handle them!

    FTP over SSL is sadly a useless protocol in the real world, where both ends will have a firewall in the way.

    From carlito

Any antispam software beside SpamAssassin?

What other antispam software (server side) is available besides SpamAssassin?

  • Uh, yeah, there's a couple things.

    From chaos
  • I usually don't recommend server-side spam filtering. By using server-side filtering, you're still allowing all the spam to flow to your site, consuming your bandwidth. Look at external services like Purity or Postini, which keep the spam off your network and can often do as good a job or better at filtering it.

    Matt : My employer uses Postini and it's been fairly good at keeping spam out of my inbox. YMMV
  • ASSP (Anti-Spam SMTP Proxy) is

    an Open Source, Perl based, platform-independent transparent SMTP proxy server available at SourceForge.net that leverages numerous methodologies and technologies to both rigidly and adaptively identify e-mail spam.

    As of now (June '09) it appears to be under active development.

    From Adam
  • With all my heart I recommend DSpam. It's more "aggressive" than SpamAssassin, which means false positives do happen when you start to train it, but I'd rather have to deal with a couple of false positives than a load of spam that goes by unnoticed.

    When I was training the filter for the first time, I had to feed it about 30 spam messages. After that, each and every unsolicited message has been recognized properly for more than a year now.

    PS. I'd also recommend using some greylisting software.

    From therek
  • All things come around to their origins. The hottest thing in anti-spam these days, at least among the commercial software products, is IP reputation. It's kind of like an RBL, but with more return codes. We're running Symantec Brightmail and I recommend it highly. Of the bad messages it processes right now, 97% are terminated on reputation faults of the incoming connection's IP address; the remaining 3% are actually scanned for content. This is a fairly recent change, and it has seriously reduced the CPU load on our appliances.

    IP reputation is something that can only be done at the receiving MTA, not the local client.

  • We used Postini and the service has been very good.

    From
  • We use a Barracuda Spam & Virus Firewall. It works great at blocking spam and has the bonus of scanning attachments for viruses and spyware.

    From notandy

How do I flush IIS Cache without restarting web server?

I have a web site on IIS 6.0 that places data into the cache. Sadly, no expiration has been set on it. Is there a way (utility or command) to force this cache to be cleared without rebooting the machine or restarting the web server?

I've already tried restarting the application pool without success.

  • You can do it with some ASP.NET code:

    // Snapshot the keys first rather than removing entries while enumerating the cache.
    List<string> keys = new List<string>();
    foreach (DictionaryEntry entry in System.Web.HttpContext.Current.Cache)
        keys.Add((string)entry.Key);
    foreach (string key in keys)
        System.Web.HttpContext.Current.Cache.Remove(key);
    
    From chaos
  • I use iisreset from the command line, but this restarts the IIS admin service and all dependent services, which may not be to your liking.

    However, it cleanly clears all cache, App Pools and .net cache too.

    From gbn
  • Can I ask why an iisreset isn't possible? The few seconds that it takes shouldn't be noticeable to your end users. You could schedule it for a quiet period of the day to have the least impact.

    From Lazlow

Connecting with Samba to a Windows Share returns "NT_STATUS_DUPLICATE_NAME"

I have set a shared directory on my Windows machine, and given full control permissions to username@workgroup.

When I try to connect to the Windows machine with Linux using smbclient, I get the error NT_STATUS_DUPLICATE_NAME. Here is the transcript:

$ smbclient -U username -W workgroup -L //windows-machine
Enter username's password: 
Domain=[workgroup] OS=[Windows 5.1] Server=[Windows 2000 LAN Manager]
tree connect failed: NT_STATUS_DUPLICATE_NAME

If I intentionally enter the wrong username, password or workgroup, I get a different error: NT_STATUS_LOGON_FAILURE. So it seems like I'm getting the other information right.

I put an entry in /etc/hosts that points windows-machine to its IP address. The NetBIOS name of the windows machine is something different.

Does anyone know what this error means?

  • You're probably getting that error because the Windows machine doesn't understand itself to be identified by the name you're connecting to it as. (Using wrong auth information changes the error because this issue doesn't crop up until later in the connection process.)

    Try connecting to it as its IP number, not windows-machine. If that works, it confirms that the name thing is what's going on, and you can resolve it either by making the PDC understand itself to be windows-machine or by just using the IP number.
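
    For example (placeholder address):

        $ smbclient -U username -W workgroup -L //192.168.1.50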

    From chaos
  • You can't use the name of the machine from /etc/hosts, and probably not the name from the DNS server either.

    You must use either the machine's IP address or the NetBIOS name of the machine specified in Windows.

    To find the NetBIOS name in Windows XP:

    1. Right click on "My Computer"
    2. Click "Properties"
    3. Click the "Computer Name" tab
    4. Read the "Full computer name" field up to the first period '.'
    From Neil
  • The NetBIOS name of the windows machine is something different

    That's your problem. It's easily fixed by a registry hack on the Windows machine. See http://support.microsoft.com/kb/281308 for the details.
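
    If memory serves, the value that KB article has you add (followed by a reboot, or a restart of the Server service) is the following; treat it as a sketch and double-check against the article:

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v DisableStrictNameChecking /t REG_DWORD /d 1 /f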

    JR

    Neil : I wonder why this isn't just fixed in a patch.
    John Rennie : It's not a bug, it's deliberately designed that way. Possibly for security, though I'm not sure why precisely. I suppose it stops you accidentally connecting to the wrong server if you have rogue entries in the hosts file or duff DNS. Personally I put the registry hack on all of my servers.

Puppet master and resources graph

Hello!

I've set up RRD reporting + graphs on my Puppet master; my nodes report as expected and I can see the 'changes' and 'time' graphs, but I am missing the 'resources' elements (the HTML and the daily/weekly/monthly/yearly graphs).

Note the resources.rrd files are there; the puppetmaster just does not generate the HTML and PNG files.

  • You're best off asking this very question on the Puppet mailing list, as there is a very active and helpful community there.

    wzzrd : Pointing to a mailinglist does not constitute helpfulness.
    From _lunix_

Offsite nagios?

Nagios is great for self-hosted service monitoring in an intranet, but what about offsite monitoring? Does anyone sell a network service availability service that goes beyond ping and HTTP?

  • Plenty of places will do off-site monitoring. Typically you will see SolarWinds Orion, NetQoS NetVoyant, and Zenoss offered before Nagios, though (at least from what I've seen offered).

    From sparks
  • I just rent a VPS for ~$30/month and run nagios on it as well. Seems to work great.

    From trent
  • http://www.serverdensity.com is a hosted application to do just that - CPU load, memory usage, process breakdown, etc.

    (This is a product from my company)

    jldugger : I'm more interested in network availability, which means a script running on a remote host rather than local to the network. For example I have SVN configured to run via xinetd; it won't show up in /proc unless someone is actively using the protocol. NagiosExchange has a check_svn script that sufficiently handles this concept.
    From DavidM
  • I use a combination of Munin + Nagios:

    Munin on all my offsite nodes, and check_munin_rrd.pl on Nagios (granted the Nagios host can read the generated RRD files).

    check_munin_rrd.pl reads the RRDs Munin generates and alerts you if anything goes above a threshold, so you can monitor anything that Munin can see (CPU, load, network). It's not real time (data is gathered every 5 minutes). For real time, you could also have a look at a regular SNMP solution, but it's a bit tricky in my opinion. In a sense, munin-node becomes your SNMP agent.

  • Honestly, I would look at running Nagios on an offsite host (like http://www.slicehost.com/). Install it there securely, then allow for a tunneled SSL link back into your main Nagios system. Then you have a single place to monitor all of your systems, internal and external.

    From bread555

What's the difference between .cmd and .bat files?

Hello, everyone!

Just curious. "Cool" people in our company always use *.cmd while no one was able to explain the difference to me.

  • In theory .cmd is more "trueЪ" :) because .bat is a script for the old DOS command.com, while .cmd is for cmd.exe from Windows NT; the latter has somewhat improved scripting. In real life they are usually equal, like writing /bin/sh or /bin/bash on Linux (meaning distros where sh is actually bash).

    From disserman
  • There are semantic differences in the command language used for each (.bat files get a compatibility version). Some of these can be illustrated by this script from over here:

    @echo off&setlocal ENABLEEXTENSIONS
    call :func&&echo/I'm a cmd||echo/I'm a bat
    goto :EOF
    
    :func
    md;2>nul
    set var=1
    

    Basically, .cmd files get the current, sexier version of the MS command language, which is why the cool kids use them.

    grawity : <3 obfuscated cmd scripts :)
    Evan Anderson : Yeesh! I stand corrected. I never knew that behaviour of CMD.EXE before.
    From chaos
  • Here is a good discussion from Stackoverflow.

    Peter Mortensen : Yes, in particular the answer starting with "Here is a compilation of verified information", http://stackoverflow.com/questions/148968/windows-batch-files-bat-vs-cmd/149918#149918
    From squillman
  • According to Wikipedia:

    .bat: The first extension used by Microsoft for batch files. This extension can be run in most Microsoft Operating Systems, including MS-DOS and most versions of Microsoft Windows.

    .cmd: The newer .cmd extension is described by Windows NT based systems as a 'Windows NT Command Script' and is helpful, as using a .cmd extension rather than .bat extension means that earlier versions of Windows won't know how to run it, so that they don't try to and mistake the commands for COMMAND.COM style files and fail to run the new style commands due to the lack of command extensions, resulting in scripts only being partially run which could prove damaging (for example; failing to check the successful copying of a file and then deleting the original anyway).

    The only known difference between .cmd and .bat file processing is that in a .cmd file the ERRORLEVEL variable changes even on a successful command that is affected by Command Extensions (when Command Extensions are enabled), whereas in .bat files the ERRORLEVEL variable changes only upon errors.

    Hope this helps.

    From KPWINC
  • I first saw the .cmd format used under OS/2. If you're thinking in DOS terms, it's like a .bat file on steroids. .bat files were introduced first under DOS-type OSes. A lot of the syntax is similar except when you begin to get into advanced functions. Also, a .cmd file has the potential to not work in a 16-bit environment (e.g. Windows 98), whereas a .bat file will probably work in all environments.

    From Pete