Friday, January 28, 2011

Is it safe to store data on a hard drive for a long time?

Is it safe to back up data to a hard drive and then leave it untouched for a number of years?

Assuming the file system format can still be read, is this a safe thing to do? Or is it better to rewrite the data periodically (every 6 months or so) to make sure it remains valid?

Or is this a stupid question?

  • I want to say probably (as long as you keep it away from magnets ;-), but I'm not sure. For long-term storage I would transfer the data to some archival format like a DVD - I think that like CDs, they're supposed to last 100 years. You can still keep the HD around for easy access, of course, but the DVD gives you a reliable backup.

    Jeremy Huiskamp : Just curious, but can you point to studies that demonstrate the long term reliability of optical media? Are there certain materials/brands that last longer than others? I have heard (but not experienced) that regular consumer-writable optical disks can become unreadable in a matter of years.
    Martin C. : DVDs (especially the ones you burn yourself) have a lower life expectancy than HDDs.
    patjbs : Most CD/DVD lifetime estimates I've seen point to around 10-15 years, and that's in optimal conditions. Non-optimal storage of optical media tends to decrease its lifetime dramatically, and I've seen this reflected in my own observations.
    Eddie : Backing up a 500GB drive to DVD is pretty painful. Optical media may or may not last as long as a hard drive, depending on many factors.
    duffbeer703 : There are "archival grade" writable optical media, but the actual lifespan is questionable AND they require very specific environmental conditions.
  • I wouldn't trust important backups to any single device for any significant length of time.

    I've had plenty of CDs that couldn't be read after a while. (Cheap ones, admittedly, but I'm leery of the longevity claims made.)

    I've had hard disks silently corrupt data.

    I seem to remember I've even had SSD failures, although with a low number of writes I'd expect them to be pretty reliable.

    Aside from all of these things, using a single copy means you've got no protection against physical disasters: fire etc. If you have multiple copies, you can separate them physically. Ideally I'd make some number of copies (e.g. 3) and run a checksum (I usually use MD5) periodically over everything. If one of the copies becomes corrupt in some way and you've got multiple other copies, you should be able to trust the majority, and create a new backup to replace the corrupted one. (Of course, if you keep the correct checksums in a separate place, you could trust even a single backup which still gives the right checksums as the canonical source for replacements.)
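
    A minimal sketch of that periodic check with standard GNU tools (the paths here are just placeholders):

    # at backup time, record a checksum for every file; keep this list somewhere separate
    find /backup -type f -exec md5sum {} + > /safe/place/backup.md5
    # later, on each copy: verify, reporting only files that changed or vanished
    md5sum -c --quiet /safe/place/backup.md5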

    Of course, how much trouble you go to depends on the value of the data. My personal home data is only backed up on a RAIDed NAS. My work data is in Google datacenters, which I trust fairly strongly :)

    From Jon Skeet
  • I would say you should recycle the media every other year or so - that is, replace the drive, disc or tape with whatever there is to replace it with, and keep more than one copy.

    Few things last forever: optical media can degrade rapidly depending on quality, the method of writing, and the environment where they're stored. Mechanical parts can always fail, or there could be bugs in the firmware related to time or to wear and tear.

    I've pondered your question often; it would be convenient to have something guaranteed to stay working for, say, 5 years. There are tapes and other forms of backup media rated for 10 or more years, but I'd never trust that, at least not without a decent amount of redundancy (several copies from different batches).

    Keeping the data fresh and continually recycled seems to be the reliable way to go - that way you get to test it regularly as well.

  • From the article:

    What advice do you have for long-term storage of disk drives and other media?

    Keep your hard drives in a climate controlled environment within an acceptable temperature and humidity range. Also, protect the drives from electrostatic discharge (ESD) and vibration -- this is normally done in their packaging, but it's important to prevent ESD, physical shock and excessive vibration when the drives are removed for storage.

    All magnetic storage media has a finite life because magnetic fields start to decay as soon as they are written. This means a tape or drive will not retain its data forever. In a proper storage environment, it's reasonable to expect that the drive should remain readable for up to 10 years.

    The concern is more about the drive's mechanical reliability; will it physically spin up? After a very long period of disuse, the spindle bearings or head actuator may be stiff, resulting in read/write errors. These considerations are particularly important for long-term archival storage systems, as well as the new class of removable hard-disk drives that are now appearing from ProStor Systems Inc., Imation Corp., Quantum Corp. and Iomega Corp.

    From splattne
  • HDDs actually have quite a high life expectancy, at least on the magnetic side (setting external magnetic fields aside). The main problem with them is that they can eventually suffer mechanically, i.e. not spin up if they are not used regularly, as oils and couplings can become a problem.

    The safest approaches to really long-time storage in my opinion are:

    • stream to one or more magnetic tapes
    • print to paper and/or micro-film
    • keep copies on operating (running) HDDs distributed over several physical machines and locations
    • use an additional external backup space like Amazon S3

    Optical media, especially the kinds available for consumer use, have an unexpectedly low life expectancy. You should check the quality of the raw data read at least every two years. You could have lost data in the meantime, though.

    EDIT: An important aspect in this case is that you should also add checksums to the stored files (MD5, SHA1, etc.), so you'd be able to tell whether some corruption has occurred (or not).

    From Martin C.
  • They will store data safely for a few years, but you would be better off copying them every two years or so, yes.

    From Chopper3
  • Given your other options for backup, HDD is the safest way to go. Other options include Magnetic Tape, SSD and optical media.

    Let's examine the pitfalls of each:

    MT: More prone to erasure when exposed to a magnetic field than an HDD. Readers are also becoming harder and harder to find; you don't want to come back in 5 years and find that there's no way to get the data off your medium.

    SSD: Reliable in that there are no moving parts, but prone to electrical degradation after many write cycles, which is troublesome and potentially dangerous. The likelihood of losing data while the drive is not in use is slim, however.

    Optical Media: The least reliable of the bunch. They're extremely prone to physical degradation (bending/warping) and it requires very little to throw them out of their deflection spec. Further, the encoding scheme used to write data to most optical media is rather complex, creating a greater likelihood of single element failure leading to unreadability.

    HDDs: Solid, sealed devices. They can be damaged by physical shock more easily than most of the above, and they have precise mechanical parts that can lead to failed reads/writes if damaged.

    The benefit of HDDs, however, is that they ARE sealed. All of the moving parts are stored in an air-filtered enclosure. The magnetic stability of the bits on the disk is quite high and unlikely to change.

    Further, if the mechanical parts fail, it is possible to have the platters removed and the data recovered from them directly.

    There's no perfect option, but of the imperfect ones, HDD would probably be your best bet.

    From ParoX
  • I've had HDDs fail while "in storage", i.e. drives that sat in a climate-controlled room for a few years and, when called into duty again, refused to spin up or be booted from.

    So no, I wouldn't say that this is a particularly good idea. As others have said, as part of a blunderbuss strategy it is one way of keeping a copy of your data, but it probably shouldn't be your only one.

    From Lunatik
  • One additional suggestion is that you should also always move data to the current format technology as soon as the previous technology is nearing its end of life. For example, currently I'd suggest moving away from IDE drives, as computers are starting to ship without IDE connectors and controllers.

    Similarly, in audio/video archiving we have moved from cassette (VHS) to CD (or LaserDisc) to DVD, and now to flash storage.

    You might keep a USB-to-IDE adapter around, but along with regularly recycling your data between storage devices and locations, you should also keep in mind moving the data to current technology so as not to wait 10 years to find out you can no longer access the data on that 5.25" floppy disk.

  • If you want your data to survive for any period of time:

    • Use tape if access is infrequent. Follow environmental guidelines and do the homework to determine how often you need to rotate the media.
    • Use disk if you need access to the data. The disks should be "active". A disk in a closet is likely to either fail or get thrown out a few years down the road.

    Using a third-party provider is another alternative. Something like Amazon S3, Mozy or a similar service gives you an ultra-low-cost way to store stuff.

  • Do not store your hard drives for any length of time. They are designed to be powered on. If you don't let the drives spin up every now and then, they will go bad. I'm talking months or a year here.

    They will break if not used. MTBF is "guaranteed" for drives in use, not in storage.

    From Thomas
  • It sounds like you're not so much concerned about hardware failures as about file corruption and bit rot. In that case, ZFS is your best ally. If data preservation is your goal, consider using RAIDZ2 if you can afford it, or at least RAIDZ1. RAIDZ is comparable to RAID5, except it uses a variable stripe width to eliminate the infamous RAID5 write hole. This is especially useful with a cheap NAS, because power failures likely won't corrupt the array. The file corruption and bit rot are taken care of by disk scrubbing, in which the data is checksummed and verified to be accurate. And that's just the tip of the iceberg of why ZFS is THE filesystem of choice.
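
    As a rough sketch of what that looks like in practice (device names are examples; raidz1 would trade one parity disk for capacity):

    # create a double-parity pool from four disks
    zpool create tank raidz2 da0 da1 da2 da3
    # walk every block in the pool and verify it against its checksum
    zpool scrub tank
    # check scrub progress and any checksum errors found/repaired
    zpool status tank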

    If you want an easy NAS setup at home with ZFS built in, check out http://freenas.org. The latest release candidate includes ZFS, and it's not that hard to set up.

    It will be interesting to see the long term results of switching to ZFS simply for data preservation... it's too new at the moment. However, the facts are all there, and it's a no-brainer: the best file system for data integrity is ZFS.

    From churnd
  • Blu-ray seems to be a nice solution for this problem.

    From m1k4

Hide users when connecting to Windows Server 2008

If I RDP to a Windows Server 2008 box without providing any username or password information, I get to see a list of the users on the computer. In Windows Server 2003 this list was not broadcast. How can I make WS2008 not advertise what users are on the system? Thanks.

UPDATE: More specifically, this is Windows Server 2008 Web Edition 64 bit.

  • I think this reghack still works in 2008.

    I'm curious as to how to reproduce this; I cannot get the RDP client in Vista to connect to a Server 2008 box at all without first providing the login credentials manually... the servers I've tried are all in a domain though. Perhaps the old RDP client does that?

    pbz : I don't have that registry key in WS2008 (specifically SpecialAccounts\UserList). A simple way is to connect without providing the password or provide the wrong password. After you click cancel you'll see a list of possible users much like in the Vista screen shot in that article. I can't believe they even have this for a server OS.
    Oskar Duveborn : You can just create the key. I do not get such a list, the RDP dialog just rethrows the credentials box if I pass it the wrong ones - I never get to actually init an RDP Window without a correct username and password. If it's a domain thing or if it's just that I use the latest RDP client (think that's it) I dunno ^^
    pbz : I know the article claims it works for WS2008, but for me it doesn't. I followed the instructions and triple checked, but they don't have an effect. Not sure why it behaves differently for you. I use Windows Server 2003 to connect with RDP v6. Are you using NLA by any chance?
    Oskar Duveborn : Hmm, NLA could be it, yes. I'm afraid I don't have any more ideas on the original problem though - at least not until I've had time to try it from a 2003 server outside of this environment :/
    Zoredache : @I'm curious as how to reproduce this -- login via rdesktop from a Linux box.
    pbz : For now, as a workaround, I decided to rename the Administrator account (I was planning on doing that anyway). If I rename the Administrator account to, let's say, XXX, on the login screen I can still see the "Administrator" user, but you can't log in if you just provide the password. You don't get to see XXX though. Looks to me like they hardcoded the UI expecting to always have an account called Administrator. If I switch to "Other User" and type XXX and the password, it works. Pretty stupid IMO.
    pbz : Well, it turns out this doesn't really work. After a reboot I see XXX as an option.
    pbz : And that, seeing XXX as an option should've clued me in... I feel so stupid, I'm gonna go and sit in the corner now. Thanks for your help.
  • This was written for Vista, but it works fine on my Server 2008 development server:

    "This is possible via the Windows Local Security Policy Editor, or “secpol” tool. To launch the Local Security Policy Editor click start, Control Panel, System Maintenance, Administrative Tools, local Security Policy. Click “Continue” to the prompt presented by the User Account Control. If you are not presented with one, it's fine, just move on.

    In the Local Security Policy editor you will see two panes, one on the left with tree-view navigation and one on the right which will have the actual definitions and items to edit. On the left hand side, expand (either by clicking on the arrow or double clicking) the "Local Policies" section, and then click on "Security Options". On the right hand side, scroll down until you see "Interactive logon: Do not display last user name". Double click on this entry and you will be presented with a dialog box that has two options - "Enabled" and "Disabled", with Disabled being selected as default. Change this setting to "Enabled", and then click on the OK button.

    After double clicking "Interactive logon: Do not require CTRL+ALT+DEL", select the Disable option and hit OK. Next, close the Local Security Policy editor, as you are done. Log off. When you are prompted to press CTRL-ALT-DEL, do so, and you will get the classic-style logon screen you have labored so hard to achieve."
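
    If you'd rather script it, the same policy is backed by a registry value; a sketch (DontDisplayLastUserName under Policies\System is the standard backing value, but test before rolling it out):

    reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v DontDisplayLastUserName /t REG_DWORD /d 1 /f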

    There also seems to be another way. I have not tested that one.

    Oskar Duveborn : +1 I look forward to a report from pbz on whether this works... it's interesting if this option has somehow become enabled ^^
    pbz : I actually came across this setting while searching the net. Unfortunately it doesn't have any effect :( I have "Do not display last user name" set to Enabled and "Do not require CTRL+ALT+DEL" set to Disabled. Thanks.
    pbz : The second link seems to deny the ability to login, which would lock me out of the box :) I'll make a new account and try that as well; I'll keep you posted.
    David Collantes : I will keep looking as well 'til I find a proper solution.
    pbz : OK guys, sorry about the storm in a teacup. Please read the updated comment at the top of the page. It actually occurred to me what was going on when I tried to log in with a random non-existent account and noticed it was displayed in the list. My guess is that their intention was to save me time retyping the last username, but in my case it was a huge time waster until I figured out what was going on. Thanks for your help!
  • I misunderstood how the logon process works. Please read the question comments for details.

    From pbz

As a system administrator, what Firefox plugin helps you do your job?

I know that there are several Firefox plugins that are invaluable for development. What plugins exist that are useful for system administration, monitoring, and the like? What plugins make your day-to-day job as a system or network administrator easier?

  • FoxyProxy - I can't stand getting attacked by others when they walk over with their stats and say, "Hmm, this serverfault.com - you wasted 5 minutes of our time on it."

    Portman : Wish I could upvote this more.
    Nick Kavadias : this looks more like a tool that would give sysadmins work to do! Maybe give it to all the end users as a ploy to get better funding for network security projects?
    From Shard
  • Xmarks, because who wants to maintain a local-only copy of their Firefox bookmarks? Xmarks will sync your bookmarks across Firefox, IE, etc. on all your computers. It's fast and stays out of the way.

    Christopher Galpin : Now when will it sync my Chrome bookmarks? :(
  • LastPass. So I don't have to remember the loads of accounts I use (and it can generate very strong passwords for each of them), both personal and at work.

    From Ivan
  • ShowIP - allows me to quickly see the IP address of the server where a particular website is hosted. Assists in managing my many clients' websites.

    DNSCache - quickly disable/re-enable Firefox's built-in DNS cache; particularly good if you're also manipulating the site's DNS at the time.

    ScreenGrab - particularly good at capturing that error and sending it to the developers.

    Milner : +1 for screengrab, makes life a lot easier!
    From Quog
  • If you're using Amazon EC2: ElasticFox

    From xkcd150
  • Even for IT, I'd have to put Firebug at the top of the list - too much good information in there.

    From WaldenL
  • Firebug and YSlow! FTW

    However, I don't see why a web browser is a crucial tool for sysadmins. curl? wget? telnet host 80?

    Matt Simmons : I admin VMware Server 2.x. If I don't run the website, I've got to use the 1500-character-long command lines.
  • WebMail Notifier - tracks web email accounts

    Live HTTP Headers - great for troubleshooting websites

    Ghostery - watch the websites that watch you

  • ReloadEvery - so I can get SO to refresh automagically while working!

    AdBlockPlus - because so many sites have ads that I don't care about.

    Mentioned before, but a super +1 for Firebug and YSlow, because there's typically some good information that can be gleaned depending on what issue you might get roped into.

    From Milner
  • Greasemonkey and the many scripts available for it are all I need.

  • Tamper Data - handy when you have to examine HTTP headers. This may be necessary if you have virtual hosts in a hosting environment. We also insert a field in the header to identify web machines in some of our web farms, to spot problem hosts.

    From mryan1
  • Charles Proxy with its Firefox plugin is much better than Firebug's network statistics.

  • Delicious Bookmarks to sync my bookmarks across machines.

    spoulson : Yes! Yes! Yes! It's like a catch-all of all the useful content I've come across, tagged and sorted.
  • You can't work without something to give you rhythm

    From Dani
  • Nagios Checker is pretty nice.

    From jwiz
  • FireFTP, for when you don't want to install an FTP client.

    From Bourne
  • In addition to others already mentioned, I find SQLite Manager invaluable.

  • Update Scanner - for every site you have to keep tabs on that doesn't have email notifications or RSS.

    If you are using Google Apps:

    Active Inbox - GTD for mail.

    From Erik

How can I map a VMS directory on a Windows 2003 Server?

Is this even possible without compromising the server's security?

  • Samba is the canonical system used to do this on Unix-like systems, and it did support VMS at one point. However, I'm not sure that it still does in the main trunk - you could try an older version. Directories can be shared from the VMS server via the Samba server, or mounted off a Windows machine via smbmount.

    If you don't want to compile your own version, HP maintains a port of Samba for VMS, which can be downloaded here.

    Alternatively, you could see if the Pathworks32 client will run in Win 2k3 server. Lastly, you may be able to just upload/download files via FTP.
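
    For the Samba route, once the VMS side is exporting the share over SMB, mapping it on the Windows 2003 box is an ordinary drive mapping; the host, share and account names below are just examples:

    net use V: \\vmshost\vmsshare /user:vmsuser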

    Keng : rats...they aren't going to let me install anything on the VMS side.
  • Pathworks for OpenVMS (if installed on VMS) provides integration with Active Directory. Security can be controlled with inter-domain trusts (Windows side) and ACLs with HOSTMAPs that map Active Directory names to OpenVMS accounts. You could limit the capabilities of the relevant OpenVMS accounts to match any security policy.

    OpenVMS files can have varying record formats, and not all of them can be easily translated to Windows files. Pathworks does a reasonable job translating record formats, but for some files the result is not usable.

    Care should be taken with file version numbers, as the Windows side can see only the file with the highest version. When a Windows client deletes a file from a shared directory, an older version can appear and confuse the client.

    From gimel
  • You could try setting up the OpenVMS system as an NFS server and the Windows server as an NFS client. The biggest problems with this approach are that OpenVMS has versioned files (so deleting a file only deletes the latest version) and that OpenVMS filesystems are case-insensitive.

    From what I've read, OpenVMS NFS is also very picky about what it will accept; anything off kilter will cause it to reject the NFS traffic.

    With OpenVMS 8.x, HP TCP/IP is included - as is NFS.
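
    For reference, with Client for NFS (part of Services for UNIX) installed on the Windows side, the mapping would look roughly like this - a sketch with example names, and the exact syntax depends on the SFU version:

    mount \\vmshost\export Z: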

    From David
  • Whether it's Samba or NFS, nothing in the Windows world is likely to compromise the VMS system - the chances of someone leaving malware on the share that could hurt the VMS system are insignificant.

    Or did you mean the other way around - that something about the mapping could affect the Windows server? In that case, it'd be no more risky than any other drive you mapped on the Windows server.

    From Ward

Printer MAC Addresses

I have three printers that I have to put on our internal network using the printers' internal NICs. They are all currently attached to Windows XP machines and shared via the OS.

So I need to submit details to the networking folks to get IP addresses for each printer.

How do I find the MAC addresses of these printers (they are all HP LaserJets)?

Edit:

Printers:

  • HP LaserJet 3050
  • HP LaserJet 1600
  • HP LaserJet 2420

All are attached via USB currently.

Edit:

None of the printers currently have IP addresses. They all have the capability, but it is currently not set up.

  • There should be a menu choice to print the configuration (on the printer itself). Other than that (or using an application that might have come with the printer for configuration, which might report a MAC address), I do not know.

    TStamper : he doesn't have the IP address so how will he ping
    Berek Bryan : TStamper is correct no ip addresses
    David Collantes : Berek, you should reword your entry. You wrote "I have three printers that I have to but on our internal network using the printers' internal NICs." But later you say they are attached via USB. I assumed, by your first sentence, that there were IPs assigned.
  • Print a test page with the network configuration from the printers. If they have a NIC in them, the test page should display the MAC. I believe LaserJets also have the MAC printed on one of the physical labels on the case. You might have to open a drawer or lift a lid somewhere to find it, depending on the model.

    Berek Bryan : i tried the test page with no luck. i will scour the case for the addresses physically printed on them. thanks for the ideas.
    squillman : Are you sure there's a NIC in them then? The configuration page should say something about a JetDirect if the printer is seeing them. You should also get a page kicked out with the JetDirect configuration if the printer recognizes the card.
    Berek Bryan : yes... all three have Ethernet jacks
    David Collantes : Squillman, he doesn't know what he is talking about.
    squillman : The other thing you can do is download the JetDirect software, plug the printers into the network and have the software find them. You don't need IP to do that. Other than that, *shrug*
    Berek Bryan : @squillman thanks for the feedback and thanks for not being a jerk about it.
    From squillman
  • When you did a 'test page', did you do it from the client machine or the printer? If you did it from the client machine, I doubt you'll get good information about the network settings of the printer (especially if you're connecting through USB).

    A quick Google search turns up the manual for your 3050; check for the others:

    Network configuration page
    The Network configuration page lists the current settings and properties of the all-in-one. To print the Network configuration page from the all-in-one, complete the following steps:

    1. On the control panel, press Menu.
    2. Use the < or the > button to select Reports, and then press ✓.
    3. Use the < or the > button to select Network report, and then press ✓.
    Berek Bryan : thanks totally missed that on my initial google search
    From l0c0b0x
  • HP LaserJet 3050

    page 271

    http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c00495173/c00495173.pdf


    HP LaserJet 2420

    page 85

    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00224567/c00224567.pdf

    From Joseph
  • I would try a broadcast ping to 255.255.255.255 from a Linux box (Windows does not allow this), given that they are on the same switch. Then have a look at the ARP cache for any responses, using

    arp -a
    

    The ARP cache lookup will also work if you can somehow connect to the device using a configuration tool (even if it does not show you the MAC address).
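
    For reference, the broadcast ping needs the -b flag on Linux, e.g.:

    ping -b -c 3 255.255.255.255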

    From Martin C.

How to remote a single application, rather than the entire desktop

I would like to run an application on one Windows machine (2008 server preferably, but any platform is okay) and display the UI on another Windows workstation.

I'm not sure what blend of technologies I need to do this. I've looked at MS Application Virtualization 4.5 (formerly SoftGrid), but that's not quite what I'm looking for. I don't want the app to run on the local host, just a remote interface: all of the app's CPU activity and network activity needs to stay on the remote host.

I know they're dissimilar technologies, but think "Parallels" for the Mac... is this even possible with Windows Remote Desktop/Terminal Services?

  • Sounds like you need Terminal Services Remote Apps, which is in Server 2008. As the technet blurb says:

    With Terminal Services, organizations can provide access to Windows®-based programs from almost any location to almost any computing device. Terminal Services in Windows Server® 2008 includes Terminal Services RemoteApp (TS RemoteApp). You can use several different methods to deploy RemoteApp programs, such as Terminal Services Web Access (TS Web Access). With TS Web Access, you can provide access to RemoteApp programs through a Web page over the Internet or over an intranet. TS Web Access is also included in Windows Server 2008

    I'm not sure if a web interface is good enough for what you need, or if you're looking for something more integrated. See here for details on TS RemoteApp, and here for details on how to do it.

    David Collantes : Actually, I will point him to this instead: http://technet.microsoft.com/en-us/library/cc753844.aspx as he isn't setting up TS RemoteApp just yet, he is just asking for information on the technology.
    Sam Cogan : good point, thanks. Have updated the post.
    Simon Gillbee : Thanks for the info. I will play with this today and see if it meets my needs :)
    From Sam Cogan
  • Citrix does this (seamless windows) if you want to pay for it, but as already answered, it's possible as part of Server 2008 and onwards without Citrix.

    You can even package this as an MSI and do a very simple policy deploy of it to users' start menus ^^

    There are also a few hacks out there to do this with the old Terminal Services and some viewport cropping... I haven't tried them though.

    seanyboy : Citrix does do it, but with Win2K8, you don't need Citrix.
    Oskar Duveborn : Yeah I wrote that, but I've tried to clarify my hopelessly vague sentence now ;)
  • If you want your users to run an application on a server with the GUI on a client, they can do this using Server 2008. Install Remote Applications. For the client side to work, they need to be running XP Service Pack 3 or Vista Service Pack 1. For printing over the internet, you'll need Vista Service Pack 1, or XP Service Pack 3 with .NET Framework 3.5 installed.

    Simon Gillbee : What is "Remote Applications"? Are you talking about Terminal Services RemoteApps like Sam's earlier answer, or something else?
    seanyboy : re: "Are you talking about Terminal Services RemoteApps." Yes. I am.
    From seanyboy

Using "Run as..." as limited user to modify network connection settings?

I'm running in a non-admin account on my development workstation, using "Run as..." for all things that need administrator privileges. Thankfully, under XP even the control panel applets allow that. This doesn't seem to work however (or I simply haven't found out how, yet) for network connection settings. Say I want to temporarily change the IP address of an adapter: what would be the easiest way to open the properties page for the network connection with full privileges, without logging in as another user (fast user switching is disabled)?

Edit:

I'm looking for a solution working on Windows XP (64), where ncpa.cpl does what I want, but seems to just open an Explorer window when started from an Administrator cmd while logged in as a limited user.

  • You can use netsh from the command line to change IP, modify DNS, etc. Examples:

    To change default gateway and IP:

    netsh int ip set address "Local Area Connection" static 10.100.100.10 255.255.255.0 10.100.100.254 1
    

    Changing DNS:

    netsh int ip set dns "Local Area Connection" static 10.100.100.20 primary
    

    Change from static to DHCP:

    netsh int ip set address "Local Area Connection" dhcp
    

    You run those from an elevated command line, as sketched below. The examples above assume the network adapter is named "Local Area Connection" (change this accordingly).
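
    On XP that just means a prompt started under an admin account, e.g.:

    runas /user:Administrator cmd

    and then running the netsh commands from the window that opens.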

    You can read more about netsh at Microsoft.

    NOTE: I believe you can use ncpa.cpl (under system32) to call the Network Connections "folder". This is what you are looking for.

    mghie : Thanks for the tip, +1. That's great for scripts. It looks a little overwhelming for casual use though, any idea how to get to the "normal" properties page?
    David Collantes : I am close to finding it. I know that the Control Panel network extension is netcpl.cpl, but it is not found on my XP VM (I run Windows 7 now). I am still researching and will post back.
    David Collantes : Amended my note to add ncpa.cpl. That is your answer, I believe.
    mghie : I'm on Windows XP 64. Entering ncpa.dll in an administrator cmd.exe opens an Explorer window with the root of the system drive (C: in my case). It can't even be started on 32 bit XP without a full path, but then it shows the same Explorer window :-(
    David Collantes : It is ncpa.cpl, not .dll.
    David Collantes : The ncpa.cpl is under system32.
    mghie : Indeed, stupid of me. Still, the point about the Explorer window remains.
    David Collantes : Not sure what could be going on on your side. If I go to my C:\Windows\system32 and right click on "ncpa", pick "Run as...", enter proper credentials, it will open the "Network Connections" window, from which I can modify my "Local Area Connection".
    mghie : I get always the same behaviour: "Run as..." on ncpa.cpl, "ncpa.cpl" in an Administrator cmd, "control ncpa.cpl" in an Administrator cmd - none of them work. All three do work when executed by the current user, which hasn't the necessary privileges. It's frustrating, this is exactly what I am looking for, if only it worked.
    David Collantes : Have you tried on a different machine? I am starting to think you might have problems on that XP machine. I have tried what I wrote on two different machines now (well, a VM and a real machine), and it works flawlessly.
    Oskar Duveborn : This difference in behaviour might well be caused by Explorer being or not being configured to "launch folders as a separate process"?
  • Try the runas with the following:

    explorer.exe /n,::{7007ACC7-3202-11D1-AAD2-00805FC1270E}
    

    Post back the behavior, please. Aha! This explains why it doesn't work. I did not realize the user I was testing with was part of the Administrators group (totally my fault). Extract from the link:

    "In the system32 folder, the file properties of ncpa.cpl show that it is the “Network Connections Control-Panel Stub”. So why doesn’t RunAs work with Network Connections? Because that stub merely calls the ShellExecuteEx API to launch an item in the shell namespace, which appears as a folder within Explorer."

    The cmd script files he refers to there, which are no longer available, can be found on this wiki.

    mghie : Thanks for the link, I'm reading and trying out now, but still it doesn't work completely as advertised. Maybe it's the XP 64? Anyway, thanks for persisting with this, I will mark an accepted answer when I have it working both on XP 32 and XP 64.
    mghie : All is well: it does work exactly as advertised, with minor cosmetic glitches - the network connection icons don't react to double clicks, and they still have the little "locked" symbol superimposed; that's what led me to believe it was still not working. However, choosing Properties in the context menu opens a completely functioning page, all options enabled. Thanks a lot, the links to the incredible blog post and the scripts made me accept this for the answer.
  • You should be able to just runas "Control" if you've set Explorer to launch new folder windows as separate processes.

    Then you can just hit whatever item inside it and it should start as that user... there might be a way to force this if you don't have "launch folders as separate processes" set, by using explorer.exe and its /separate argument.

    mghie : Thanks for the tip, +1, I will mark an accepted answer when I have it working both on XP 32 and XP 64.

What is the best off-the-shelf home/small office NAS?

I run a multi-OS home environment and am looking for a new NAS as central storage.

OSes in the environment: OS X, Ubuntu (Linux), Windows XP

Current protocols: SMB, NFS, BitTorrent, PnP, SSH

Size requirements: > 1TB (RAID 5/6)

I currently run a Synology 406, which is fine except that it only supports disks up to 500GB, and my storage requirements are encroaching on its maximum capacity.

  • Obviously it depends on how many drive slots you need, but for personal use I'm a big fan of Thecus's products - I have a 5200 Pro, and as it's *nix-based you can extend its functionality quite heavily, certainly with the protocols you're looking for.

    Jon Skeet : I've got the Thecus N2100 and it's great.
    From Chopper3
  • You might want to take a look at DROBO

    They offer a NAS add-on. It's not RAID 5 but their own disk mirroring solution. The base version can give you up to 3TB of usable storage if you use 4x1TB disks, is easy to upgrade, and you don't have to match drives. They also now have a higher-end system available; it's a bit more costly, but it takes 8 drives instead of four and lets you configure two-drive failure redundancy (a kind of RAID 6) if you wish.

    i-moan : wonderful! I did not know about this. I wonder if the underlying OS is Linux and if it's accessible
    Chris : I run a Drobo off of my OS X machine at home, and share the volumes out over the network. It's been rock solid for the last six months. 4 1T drives will give you just about 2.7T usable. If you get the Drobo with the Droboshare NAS add-on (which runs Linux) you can add a bunch of apps including ssh etc..
    i-moan : THANK YOU so much! Since asking I have investigated and found it to be the best option. This month I am about to order the DroboPro 4TB Bundle with (4) Western Digital 1TB Hard Drives. Thanks
    From Vagnerr
  • I'm using a Windows Home Server. While the management and automated backup features are only available on Windows, it works like a normal Windows server for file shares, so it speaks CIFS.

    My specific Model is the HP MediaSmart EX470, which has recently been replaced by a newer model.

    i-moan : I hate downvoting, but this does not match 90% of my requirements.
    Michael Stum : If it doesn't help then feel free to do so; I'm just not sure which requirement it does not meet - it works with Linux, Mac OS and Windows, has much more than 1 TB if you want (mine is 4 TB atm), smb/cifs is the standard anyway, and you can install BitTorrent on it. Only SSH seems to be missing.
    i-moan : I stand corrected. It does do more than I thought.
  • A friend of mine is using the 2-bay D-Link DNS-323, and he's quite happy with it. There is a larger 4-bay version (DNS-343) that would fit your needs. Its OS is μClinux-based, so it should have everything you need.

    From vartec
  • QNAP offers a nice selection of NAS solutions. For example, to meet your RAID requirement you could use the QNAP 409.

    One of the main reasons I went for QNAP for a home NAS was its support for SqueezeCenter.

    Thoreau : I must second the QNAP. At my office we have the 409U; it was cheap, and we threw in 1.5TB Seagates (yes, *those* Seagate drives). The QNAP was quick and easy to set up, and it comes with loads of features and a very helpful forum and staff.
    i-moan : I knew about QNAPs, but the info is very useful
    From kristof
  • Last time I was in a shared Win-Mac environment, this little TeraStation took care of joint file space. Never had the slightest issue with it. The thing kept itself happily cooled, made no noticeable noise, and had more options than I could shake a stick at.

  • I have requirements similar to yours, and when I went looking, all the solutions I found were either underperforming (i.e. too little RAM/CPU to do anything other than serve files) or overpriced. I ended up building a mini-ITX system into a bookshelf case with hotswap bays. I chose one of the new dual-core Atoms for lowish power, and I boot from an SD card in a USB adapter so that the disks are just for storage. I've got plenty of spare CPU and RAM to run mt-daapd to serve out the music subdir, as well as other 'internal' apps.

    From pjz

What's the modern way to open a URL during logon?

Currently I am using this VBScript during logon to a Windows desktop:

Set oIE       = CreateObject("InternetExplorer.Application")  ' Create an Internet Explorer window
oIE.Left      = -5000                                         ' Position the window off-screen while it initializes
oIE.Top       = -5000                                         ' Position the window off-screen while it initializes
oIE.navigate "http://myurl/?popup=1"                          ' This is the URL to open
oIE.ToolBar   = 0                                             ' This removes the toolbar
oIE.StatusBar = 1                                             ' This shows the status bar
oIE.Resizable = 1                                             ' This allows the maximise button
oIE.Visible   = 1                                             ' Show the window

That is run from the user's logon batch file.

I've been asked to add another one, and I dislike this approach but don't know how else to achieve it.

Thanks

Further Detail

The situation is that it will need to be run only for certain users in Active Directory - hopefully they will be in specific OUs, and if they are not then I'll put them there! So my approach has been from a 'logon script applied to groups' angle - I didn't want to use the Startup folder or the registry, so that the users who have the screen popped up can easily be managed through Active Directory. Hope that makes more sense!

  • ShellExecute the URL.
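
    For instance, from the same sort of VBScript logon script, a minimal sketch reusing the URL from the question:

    CreateObject("Shell.Application").ShellExecute "http://myurl/?popup=1"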

    Ant : Why has this been downvoted? Why is it wrong?
  • If it is only for yourself, or you are not worried about users removing it, you could just create a shortcut to the website on the desktop like any other shortcut, then move the shortcut to the Start menu in All Programs > Startup. For all users, just drop it in \Documents and Settings\All Users\Start Menu\Programs\Startup; for a specific user, drop it in \Documents and Settings\DESIRED USER\Start Menu\Programs\Startup, taking care to replace DESIRED USER with, well, the desired user.

    Alternatively, you could put the shortcut in a 'secret' location such as \WINDOWS and create an entry in the registry at HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run for the currently logged-on user, or under HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run for all users. The name should be a short description of the shortcut, and the value should be where the shortcut resides on the system, appended with .url (e.g. if I saved a shortcut to Google in the Windows folder and named the shortcut google, the value would be "C:\WINDOWS\google.url").
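
    As a sketch, that Run entry can also be created from the command line (the value name and path are just examples):

    reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Run" /v OpenGoogle /t REG_SZ /d "C:\WINDOWS\google.url" /f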

    MrBrutal : The situation is that it will need to be run for only certain users in Active Directory - hopefully they will be in specific OU's and if they are not then I'll put them there! So my approach has been from a Logon Script applied to Groups angle - I didn't want to use the StartUp folder or the registry so that the users that have the screen popped up can easily be managed though Active Directory. Hope that makes more sense!
    From joshhunt
  • Run the command "start http://your-url", perhaps?

    (This has much the same effect as Richard Gadsden's solution, which seems fine to me.)

    It would help to know what you dislike about your current solution, though.

    MrBrutal : Yeah, sorry I wasn't very clear - I have had this on my mind for a while and just jumped straight in when the beta occurred. I've added more detail to joshhunt's answer comments.
    From Paul

Most common Unix/Linux shell

This question might be a little naive, but is there one shell that tends to be the most popular among Unix/Linux users?

My previous company was basically standardized on tcsh, so I learned that one, for better or worse, but I'm wondering if I should learn bash, ksh, or another shell, if those tend to be more common.

  • For Linux I'd say bash, while I see that classical Unix variants seem to prefer csh.

    David Zaslavsky : +1 bash seems to be nearly ubiquitous in my circles (I'm a Linux person)
    Andy White : That seems pretty accurate, from what I've seen.
    Mikeage : Or at least something bash-like: sh (for fast shell scripts), dash, etc. A lot of older unixes also use ksh.
    Avery Payne : Have seen bash since the Slackware 6 days, and it will probably stay that way for a few more years.
    kubanczyk : Wait a minute... csh? You meant to say that classical Unix variants seem to prefer ksh, right? AIX, Solaris, HP-UX. From what I know, csh was abandoned a decade ago, except for the *BSDs, which still stick to tcsh.
  • Mac OS X switched from tcsh to bash as the default login shell around the 10.3 release. Solaris tends to default to sh for root and csh for regular users.

    From Graham Lee
  • Since I want my scripts to run on as many systems as possible, I write and test my scripts with the Bourne shell (/bin/sh), which is present on every Unix system I have ever met.

    Just be aware that on some systems /bin/sh is in fact bash (e.g. my Mac).
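
    A minimal sketch of the style that stays portable - plain POSIX constructs, no bash-only features like arrays or [[ ]]:

    #!/bin/sh
    # list regular files, quoting every expansion and using printf rather than echo -e
    for f in /var/log/*.log; do
        [ -f "$f" ] && printf '%s\n' "$f"
    done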

    codebunny : When bash is invoked as sh, it emulates sh so sh scripts will work correctly.
    From chburd
  • Learn sh as your primary shell.

    As a result your scripts will also work in bash.

    There are also:

    • zsh
    • ksh
    • tcsh

    But to be honest, most Linux distros link sh to bash (as it's the most popular superset of sh).

    Brian Campbell : Er... csh is not compatible with bash. No one links csh to bash. tcsh is the most common variant of csh, and csh may be linked to tcsh. Many systems link /bin/sh to bash, but some link it to dash (and I think that a few have linked it to zsh).
    i-moan : indeed you are right. thank you :)
    From i-moan
  • I guess I'm a "classical UNIX variant" (from Michael Stum's answer) kind of guy, as I use tcsh although I run Scientific Linux (a rebranded RHEL, like CentOS) and Ubuntu.

    This stems from the fact that some of the software I used when first getting into Linux (and still use) was designed to run under csh and didn't always play nicely with sh/bash. So that's what I learned and I've just stuck with it. The first thing I do when setting up a new account is switch my default shell to tcsh.

    While I can use bash without a problem, the syntax isn't the same between the two shell types (contrary to i-moan's comment), and I actually prefer the tcsh syntax, although I believe it is a bit more limited. I don't do all that much shell scripting, so I've never had an issue.

    That being said, I agree with the other commenters that bash is the most popular, possibly because it has been the default for a long time and people don't generally bother switching from the default, as David mentioned in the Comparison of Unix Shells question.

    From dagorym
  • Use the shell you're most comfortable with, but you should be familiar with the others and their basic operations. I know that Oracle (or at least the Oracle DBAs I've known) tends to do things using ksh, while a lot of other scripts (like installers) are run in sh.

    The nice thing is that most distros (Linux or other *nix variants) have many of the shells available, so you can still run scripts in shells other than your login shell.

    True sh appears to be going away in favor of bash, as Linux has made bash a much more popular shell for working than sh (at least for those "classical UNIX variants").

    I'm currently forcing myself to run zsh to learn more about it (I have usually run bash or tcsh in the past). Then I'll toy with ksh, though I tried that once and really didn't like it.

    From Milner
  • Certainly bash is pretty popular amongst Linux distros - but if you're aiming to be a cross-UNIX admin, I think ksh is the one to go with. Wherever I've worked that runs multiple flavours of UNIX, it's the default. It's slightly less broken than the original Bourne shell, but it hasn't introduced its own features which aren't POSIX compliant (as bash, csh and tcsh did).

    Indeed, this is the reason the Debian-based distros now tend to link /bin/sh to dash - it is POSIX compliant, and also a lot less memory-hungry than bash. Unfortunately, a lot of shell scripts written for Linux distros over the years say #!/bin/sh on the shebang line but actually assume bash is the shell, and use features which are only available in bash - not good practise.

    Bash is a good end-user shell, but I would argue that its large memory footprint makes it less than ideal for shell scripts.

    From GodEater
  • The most popular is bash. The best is zsh, but it's not so much better than bash as to convince many people to change their habits.

    Using csh or sh is a mistake. You should only use ksh if you're forced to (i.e. bash is not available). If a script must have good portability, either use sh and be aware of which features are unique to your implementation, or consider bash, as it's quite widespread.

    I have seen people who are significantly more comfortable with tcsh prefer it over bash.

    I think it makes sense to habitually use the same shell for interactive use as for scripts. I recommend zsh to people who are newly learning and who have control over their environment, but bash to people who are already slightly comfortable with it. People who may have old shells forced on them or be forced to maintain old scripts should get comfortable with sh, csh, and ksh.

    My comments come from the bias of someone who uses bash regularly, occasionally notices zsh, says "that's cute" and goes back to bash, and who resents being occasionally forced to use sh or csh due to legacy "requirements".

    Note that when I say sh, I mean the old bourne shell, and when I say csh, I mean the old csh. Sometimes these are links to more modern shells.

    Also beware that there are two major releases of ksh out there, with some significant differences in feature sets.

    From carlito
  • It's perl.

    I always find perl, even on some old Solaris 2.6 boxes, and writing perl is far more portable than writing shell script, keeping in mind that /bin/sh may vary a lot. So instead of writing some csh/ksh/bash - where I'll always find one server with the wrong version of the shell for feature X - I've switched to perl. It's sometimes a bit more verbose, but I get 100% portability for my work.

    And it goes without saying that a shell by itself does nothing: if you don't have the correct grep/awk/sed versions (and there are dozens of awk implementations), you're screwed, while perl's pattern matching, data structures, etc. are universal.

    From Benoît
  • I'm working with a lot of AIX servers, which use ksh. It is also the default shell for root on OpenBSD.

    Whenever it is possible, I switch the default shell of my user account to zsh.

    I write sh or perl scripts as often as possible.

    From Benoit
  • I'm relatively new to Linux (I've been working with it for about two years), and I think most modern distros use bash.

    Though I've heard good things about zsh, and it's backwards compatible with bash.

    Check out this link: it gives a good overview of the advantages of zsh over bash.

    From Hofa
  • I would recommend against using 'tcsh' as a shell. It tends to make you think that writing shell scripts in tcsh is ok. It's not.

    The real attraction seems to be the up-arrow command-line ease of use, but with bash you get that anyway.

    Also, coding scripts is much easier in sh and its derivatives (like bash and ksh) than in csh and tcsh. I've also found that sh is on every flavor of Unix, and bash is easily obtainable as a first-choice add-on.

    I'd warn against using the features of ksh and bash (like variable arrays and hashes) unless you can guarantee their existence throughout the enterprise.

    From ericslaw
  • Bash is one of the most popular shells IME.

    If you want to write portable shell scripts, use /bin/sh, which is a very small shell. It doesn't have all the features that bash has. Some Linux distros have /bin/sh symlinked to /bin/bash, so be careful if you're writing a shell script. Ubuntu has /bin/sh symlinked to /bin/dash, which implements only the /bin/sh feature set: if it works on /bin/dash, then it'll work on sh.
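
    A quick way to check what /bin/sh really is on a given box (readlink -f assumes a GNU userland):

    ls -l /bin/sh
    readlink -f /bin/sh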