Tuesday, January 18, 2011

Difference between physical and virtual keys

Hi,

I am a member of MAPS (Microsoft Action Pack) and for Windows Server 2008 they have two sets of keys - Physical and Virtual. Can anyone tell me the difference between the two?

I am assuming Virtual is for using in Virtual Machines but this is just a guess.

Thanks

  • That guess is correct. Physical is for a traditional installation, Virtual is for use in a VM.

    From MarkM

Oracle hardware requirements and optimization

I'm an application developer. We have a reporting solution for a client on an Oracle 10g database. They would like to speed up the execution of some calculation stored procedures. OK, I know about query plans and optimizations, but I also need to give a cost-effective hardware recommendation. Do they need more RAM, more CPU (the server is virtual, so that is easy), or maybe a better storage solution? How do I know where Oracle hits the wall, and which parameters should I measure while my procedures execute? Thanks for any advice.

  • You will have to dig into the DBA hills. Oracle 10g has several mechanisms which tell you where it is constrained, like ADDM, AWR, ...

    If you have an Enterprise Edition license (and your server has enough RAM), then you can take advantage of the web-based Enterprise Manager, where you can see graphically what is being used and where more is needed (RAM, CPU, disk, disk I/O, ...).

    Take a look at database management and configuration management. It's not so hard even for a novice, but do discuss it with a DBA.
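
    If you just need numbers rather than the graphical tool, here is a minimal sketch of pulling an AWR report from SQL*Plus (it assumes SYSDBA access and a licence for the Diagnostics Pack that AWR/ADDM belong to):

    -- take a snapshot immediately before and after running the procedures
    EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
    -- ... run the calculation procedures ...
    EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
    -- then build a report covering those two snapshots
    @?/rdbms/admin/awrrpt.sql

    The report's "Top 5 Timed Events" section shows whether the time goes to CPU, I/O waits or something else, which maps directly onto the RAM/CPU/storage decision.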

    From slovon

Configuring Exchange to relay internal e-mails to an SMTP server

I am working on a SharePoint implementation for a group within an international company. SharePoint has the ability to retrieve e-mails from an SMTP Drop Folder. I have installed the SMTP service on the same server and have been able to send a test e-mail to the server using telnet and an e-mail address of "something@ServerName.fully.qualified.domain"

However, I'm not sure what I need to do to allow the corporate Exchange 2003 environment to recognize the delivery address and forward it to my SMTP server.

I have heard of Exchange SMTP Connectors, but not sure if it is relevant here.

Another idea is to have an MX entry added to the DNS servers, though I would have thought a fully qualified machine name would be sufficient for the Exchange server to route the message to the SMTP server - but seemingly not.

The e-mails that I hope to be receiving will always come from within the AD domain, so I don't have to worry about routing on the internet.

My Exchange knowledge is minimal, so please use small words when offering any suggestions. I am trying to avoid any changes to the Exchange environment, as these will probably not get approved by the global Exchange admin team.

  • Your thinking on the SMTP connector is correct. You need to create an SMTP connector on your Exchange server to forward all mail for ServerName.fully.qualified.domain to the SharePoint server. The steps to do this are as follows.

    1. Ensure that DNS is set up correctly so that you can fully resolve ServerName.fully.qualified.domain to your SharePoint server.
    2. Open the Exchange System Manager and navigate to the Connectors Container in your Administrative Group. If you can’t see the Administrative Groups you have to go into the properties of the Exchange Organization object and enable “Administrative Groups” and “Routing Groups” views.
    3. Right click on the connector container and select new SMTP connector.
    4. Under the general tab, select the option Forward all mail through this connector to the following smart hosts and, in the corresponding text box, type the name of the SharePoint server.
    5. On the local bridgeheads box, select add and pick your Exchange server.
    6. Select the address space tab, click add and select SMTP. Click OK, and then in the Internet Address Space Properties dialogue box, in the E-mail domain box, enter the (sub)domain you want forwarded to the SharePoint server - ServerName.fully.qualified.domain.
    7. For connector scope, ensure Entire organization is selected.
    8. On the delivery options tab make sure it is set to always run.
    9. Click ok and apply the new settings.

    Mail should now flow to your Sharepoint server when addressed to that subdomain.
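
    A quick sanity check for step 1, run from the Exchange server itself (the host name is the example name from the question; port 25 needs to reach the SharePoint box's SMTP service):

    nslookup ServerName.fully.qualified.domain
    telnet ServerName.fully.qualified.domain 25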

    From Sam Cogan
  • Step by step instructions for configuring incoming email are available here as a PDF.

    From Evan M.

Exim MTA shows dmz address and local domain name in outgoing message headers

We've recently added an Exim-based (MailCleaner) MTA in our DMZ that sends and receives email to and from our LAN email server. It works great, but I'm a little wary of one of the headers it places in outgoing messages sent to external recipients.

Specifically, it's the 'Received' header for the delivery from our LAN email server to the MTA in the DMZ, and it looks like:

Received: from [192.168.XX.XX] (helo=mailserver.localdomain.local) by mail.senderdomain.com stage1 with esmtp  with id SomeMessageID for <recipientemail@recipientserver.com> from sendername <senderemail@senderdomain.com>; Tue, 24 Nov 2009 13:06:58

Where 192.168.XX.XX is the DMZ interface of the LAN mail server, localdomain.local is our LAN domain name, and senderdomain.com is the externally-resolvable domain name for our organization.

Is it possible to modify this header so it doesn't divulge our local domain name and DMZ address range on every outgoing message? I assume we can't simply remove it, since out of the several 'Received' headers in the delivered messages I've been able to examine it's the only line that contains the 'from Sendername ' portion identifying the sender's email address in our organization.

Any hints about how to modify or mask this would be appreciated.

  • The content of the Received: header is defined in Exim by the configuration option received_header_text. The default setting, from which you can see how your example is constructed, is:

    received_header_text = Received: \
      ${if def:sender_rcvhost {from $sender_rcvhost\n\t}\
      {${if def:sender_ident \
      {from ${quote_local_part:$sender_ident} }}\
      ${if def:sender_helo_name {(helo=$sender_helo_name)\n\t}}}}\
      by $primary_hostname \
      ${if def:received_protocol {with $received_protocol}} \
      ${if def:tls_cipher {($tls_cipher)\n\t}}\
      (Exim $version_number)\n\t\
      ${if def:sender_address \
      {(envelope-from <$sender_address>)\n\t}}\
      id $message_exim_id\
      ${if def:received_for {\n\tfor $received_for}}
    

    As for changing or removing the header: beware, some best-practice advice lies ahead.

    • Are you sure that you want to remove this information? Its presence allows you to track abuse reports much more easily. The exposure of your internal IP addresses is actually of fairly limited risk.

    • Technically you can remove this first received header by using headers_remove but it's certainly not RFC friendly and there is a chance of creating mail loops.

    • If you must mask the information then you would be best to do so by modifying received_header_text. For maintainability and the principle of least surprise, even if the MTA isn't performing any other functions, you probably want to make your changes as specific as possible. This would involve putting some additional conditions in those if statements for facts that you know will always be true, such as whether the sender has authenticated themselves (see the sketch below).
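
    As a rough, untested sketch of that last point: take the default shown above and wrap the sender portion in an extra condition, so the internal name and address are replaced with something generic only when the message arrived from the LAN relay (192.168.XX.XX and mail.senderdomain.com are the placeholders from the question and need substituting):

    received_header_text = Received: \
      ${if eq {$sender_host_address}{192.168.XX.XX}\
      {from mail.senderdomain.com\n\t}\
      {${if def:sender_rcvhost {from $sender_rcvhost\n\t}\
      {${if def:sender_ident \
      {from ${quote_local_part:$sender_ident} }}\
      ${if def:sender_helo_name {(helo=$sender_helo_name)\n\t}}}}}}\
      by $primary_hostname \
      ${if def:received_protocol {with $received_protocol}} \
      ${if def:tls_cipher {($tls_cipher)\n\t}}\
      (Exim $version_number)\n\t\
      ${if def:sender_address \
      {(envelope-from <$sender_address>)\n\t}}\
      id $message_exim_id\
      ${if def:received_for {\n\tfor $received_for}}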

    nedm : This is just what I was looking for -- not sure how I missed this section in the conf file! I was aware of the headers_remove "nuclear" option, but it didn't seem like a good idea. I'll probably just add a check as you suggest and then obfuscate the DMZ and local domain info. Thanks much.
    From Dan Carley

Changing to a new certificate authority for existing website

Hi,

We have a website certificate issued that will expire soon. The current certificate issuer is charging us too much and we would like to change to a new company.

If we get a new certificate from another company (which would be properly certified and all), would our users get any warnings in their browsers?

  • No, they will not receive a warning, as long as the new certificate is issued with a CA root that the client trusts.

  • +1 Chris.

    The only additional thing to beware of is whether your new certificate provider uses chained certificates. If they do, you will need to ensure that your web server is configured to deliver the chain, when you install the new certificate.
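
    A quick way to check what the server actually hands out once the new certificate is installed (www.example.com is a placeholder; this assumes the OpenSSL command-line tools are available):

    openssl s_client -connect www.example.com:443 -showcerts

    The output lists every certificate the server presents; if the intermediates are missing from that list, the chain is not being delivered.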

    From Dan Carley

Allowing hosts to connect remotely to MySQL?

How do you allow hosts X to connect remotely to MySQL and have full access?

  • In your MySQL configuration, /etc/mysql/my.cnf or similar, you most likely have one or both of these settings: skip-networking and bind-address. If you have skip-networking, you obviously want to remove it.

    Regarding bind-address, it might be set to bind/listen only on 127.0.0.1 (aka localhost). One solution is to remove that option completely, falling back on the default of listening on every interface. Another is to explicitly set the IP address of the network interface you want mysqld to listen on.
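
    A minimal sketch of the relevant part of my.cnf (the address is only an example):

    [mysqld]
    # skip-networking           <- remove or comment this out if present
    bind-address = 192.168.1.10 # or drop this line to listen on every interface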

    Also, remember that when you create MySQL users you have to be explicit about where they are allowed to connect from. Here are a few examples.

    'andreas'@'localhost'
    'olsson'@'192.168.1.42'
    'andol'@'%'
    
    Chris_45 : Ok but in Win32?
    rahul286 : On Win32 you also get a mysql shell where you can use the GRANT command as I mentioned in my answer below. Sorry if I'm overestimating Windows (haven't used it in a long time).
    Chris_45 : But there is no my.cnf in Win-environment?
    lg : In win environment there is mysql.ini or my.ini
    From andol
  • I often need to access MySQL from remote locations and this is what I do [warning: it's not the smartest or most secure way, but it gets the job done for me... ;-)].

    I log into mysql prompt on database server and execute command like:

    grant all privileges on *.* to 'username'@'%' identified by 'userpassword';

    This grants all privileges to the MySQL user 'username' from any location (indicated by '%') using the password 'userpassword'.

    In the above line, *.* indicates access to all databases and all tables.

    You can specify dbname.* if you want to limit access to dbname.

    You can also replace *.* with dbname.tablename if you want to limit access at the table level.
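
    For example, a slightly less open variant (names are hypothetical) that limits the user to one database and to clients on one subnet:

    grant all privileges on dbname.* to 'username'@'192.168.1.%' identified by 'userpassword';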

    For more info: Check MySQL ref for GRANT Command

    Chris_45 : Ok , great I took a user and I missed the semicolon btw.
    From rahul286

Maximum RAM on Biostar P4M8PM7 Socket775 mATX board

I have a server with a Biostar P4M8PM7 ("Pro-M7") board based on a VIA chipset. It's a strange board to put in a server because it seems like more of a desktop board to me, but alas!

It takes DDR2-667 (PC5300) RAM. What I can't figure out is the maximum amount I can put in it, as I cannot find the manual anywhere online. I've found a few marketing broadsheets from online retailers that say, "up to 2 GB of RAM!" but I am not sure whether to believe them. They also do not seem to be quite for the same board, as they indicate DDR2 400/533 RAM, for example: http://www.geeks.com/details.asp?invtid=P4M8P-M7. The manufacturer's web site says the same thing, but does not elaborate.

It's a 64-bit CPU and board; is there a technical reason why the board would not be able to address more than 2 GB? Can someone tell me what sort of reason that would be? I bought this server from someone really hoping I could put 8 to 16 GB in it, and wanted to do some research before I gave up.

On a related note, it's not indicated anywhere whether it can take ECC RAM; the existing chips are not ECC, but most memory sold in the range I'm looking for (e.g. DIMMs with enough chip density to do 8 GB) seems to be server-class and for that reason ECC. Any ideas?

Thank you very much for your consideration in advance!

  • It only has two memory slots and uses a VIA P4M800 Pro chipset which can only handle 2GB (i.e. 2 x 1GB modules). It can't utilise ECC.

    From Chopper3

mod_wsgi+Django with a different Python version

My server runs Python 2.4 by default, and I've used make altinstall to get an alternate Python 2.6 installation, for my Django webapp.

However, mod_wsgi seems to be defaulting to using /usr/bin/python (2.4) rather than /usr/local/bin/python2.6.

Is there a simple way to force mod_wsgi to use Python 2.6?

  • Read the documentation for mod_wsgi and it tells you what to do. See:

    http://code.google.com/p/modwsgi/wiki/InstallationIssues#Multiple_Python_Versions

    Specifically, use the WSGIPythonHome directive to tell mod_wsgi that your Python is actually in a different location.
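
    For example, a minimal sketch for the Apache configuration, assuming make altinstall used the default /usr/local prefix (i.e. the interpreter is /usr/local/bin/python2.6):

    # server-wide, outside any VirtualHost
    WSGIPythonHome /usr/local

    Note that mod_wsgi is linked against a particular Python version when it is compiled, so you may also need to rebuild it against 2.6; the page linked above covers that case.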

    If this doesn't work, then make sure you are no longer loading mod_python into the same Apache instance, if you had been previously. Perform a complete stop and start of Apache, not just a reload, for good measure, to ensure that mod_python is no longer hanging around. The mod_python module can't be used at the same time because it will take precedence in initialising Python and will use whatever it was compiled against, which could well be different to mod_wsgi. Normally this mixing will cause a crash, but it is feasible that it may carry along for a little while before deciding to croak.

    BobMarley : Sorry, I don't know how I missed that bit of documentation! The documentation on the -fPIC error proved helpful, too. Thank you so much!

Is it possible to have a unique .bash_history file per host?

My home directory is on an NFS mount. The commands I use on one machine are usually quite different to those on another. Is it possible to have each host write to its own history file?

  • There is the environment variable HISTFILE, which controls where the history file is. You could try to create a login script that resets HISTFILE according to your hostname/IP.

    From Thilo
  • It certainly is. You just have to change the name of your history file on each host. In your .bash_profile put something like:

    export HISTFILE="${HOME}/.bash_history.`hostname`"
    
    brianegge : My bash wouldn't expand ~, so I had to use ${HOME} export HISTFILE="${HOME}/.bash_history.$(hostname)"
    Kamil Kisiel : Thanks, modified the answer.

Windows 2003 "Windows cannot display Windows firewall settings"

Our Windows Server 2003 SP2 has been hit by a hacker that installed a few services and other niceties and disabled our firewall.

I've managed to clean up (the strange thing is that no detection software I used was able to find anything wrong) but the firewall is still down and I'm getting the above error when trying to open its configuration.

I've followed a few links but nothing seems to work. Is there a way to re-install it?

Stuff I tried:

  • If your server was really hit by a relatively adept intruder, you need to wipe and re-image it. There's no telling what he or she may have done or planted, not to mention that the time required to be reasonably sure of successful "cleanup" would take longer than rebuilding a server.

    From phoebus

Explain the use of supervisor modules in MDS 9000 switches

Hello! Can anyone explain to me the use of supervisor modules in MDS 9000 series switches (MDS 9513 has 2 supervisor modules)?

  • In the Cisco world, supervisor modules are the modules that run IOS; they are where you do management (or supervisory) functions. The fact that it has two means that they are redundant, so you can lose one module and keep on running. You will generally see these in higher-end "blade" type switches - specifically the Catalyst 4000 and 6500 lines.

    From Zypher

Bandwidth limitations of network loopback device?

I have several JVMs that all listen to TCP 80 each bound on their own 127.0.100.1 -> 127.0.100.255 range.

Is there a theoretical limit to the effective bandwidth that can be pushed through the LO device? Is this simply a limitation of the kernel and TCP stack vs. the limitations of a 'regular' network interface?

  • You're correct, there's no NIC-speed issues, just your CPU/s, kernel and stack.
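
    If you want to measure it rather than guess, a rough sketch with iperf (assuming iperf is installed; the address is one from the question's range):

    iperf -s -B 127.0.100.1 &     # server bound to one loopback alias
    iperf -c 127.0.100.1 -t 10    # client: 10-second throughput test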

    From Chopper3

Cat5e cable: what are each color used for?

I have been curious about this for a while, since there are some people who use a modified Cat5e cable with a switch to disconnect data sending without actually disconnecting the whole internet connection.

To explain the above further: they cut the green wire inside the Cat5e cable, attach a switch, and from there they can simply switch it on and off without actually disconnecting.

Can someone explain what colors correspond to what functions in Cat5e?

If so I would be very appreciative. Thank you very much!

  • SEMI-DUPLEX:
    1. TX+: Simplex (Transmit), positive
    2. TX-: Simplex (Transmit), negative
    3. RX+: Simplex (Receive), positive
    6. RX-: Simplex (Receive), negative

    FULL DUPLEX:

    1. TX_D1+: Simplex (Transmit), positive
    2. TX_D1-: Simplex (Transmit), negative
    3. RX_D2+: Simplex (Receive), positive
    4. RX_D2-: Simplex (Receive), negative
    5. BI_D3+: Duplex channel 1, positive
    6. BI_D3-: Duplex channel 1, negative
    7. BI_D4+: Duplex channel 2, positive
    8. BI_D4-: Duplex channel 2, negative
    DucDigital : interesting, that's quite clear enough with the picture from Denis below... :)
    From o_O Tync
  • It depends on the wiring scheme. These are the pin colour assignments from the Wikipedia page for TIA/EIA-568-B:

    Pin   T568A           T568B
    1     white/green     white/orange
    2     green           orange
    3     white/orange    white/green
    4     blue            blue
    5     white/blue      white/blue
    6     orange          green
    7     white/brown     white/brown
    8     brown           brown

How many physical CPUs on virtual server (vmware)

I have a Windows server running under VMWare.

CPUs are reported as Dual 2.40 GHz Intel Xeon E7330.

Is this the real number of CPUs? How can I find the real number of CPUs?

  • 7000-series could have two or four physical CPUs. I usually see this config set up with four CPUs.

    From ewwhite
  • That would be the number of CPUs presented to the machine by VMware. You would need to check the VIC (VI Client) to see the exact configuration of the physical server.

    From Zypher
  • From within the Windows VM, you're only seeing the number of vCPUs that are allocated to the VM by the virtual environment administrator. That number can be less than or equal to the number of real cores on the ESX host, but no higher.

    If you have access to the VirtualCenter VI Client console, you can see the ESX Host summary page for the number of physical CPU cores on each host.

    There's still a bit of fact checking to establish how many cores/sockets you really have, depending on what version you're running and whether your CPUs are multi-core/have HT enabled. The CPUs you've listed are quad-core.

    Short answer: You can't tell how many CPUs the host has from within the VM.

    frankadelic : Are the specs of the CPUs listed inside the VM the same as the host's? So, the VM listed Dual 2.40 GHz Intel Xeon E7330 -- does that mean these CPUs are on the host? Or could they be completely different, AMD for example?
    ewwhite : It means that the host CPU is that speed and model.
    Chris Thorpe : Typically they will be the same, but there's an ESX feature called CPU masking which could be in use, and that would misrepresent the CPU ID to a guest. The likelihood of CPU masking being set is quite low, as it's used when you have a disparate collection of hosts in an ESX cluster. When it is, the CPU reported to the guest is an older generation than the true CPU model.

Ubuntu Server and Tomcat 6 WebApp SecurityUtil Exception

I am running a clean install of Ubuntu Server 9.10 with Tomcat 6 installed from the Ubuntu packages. When I upload a web app through the Tomcat Manager and it starts automatically on boot with /etc/init.d/tomcat6 start, the library JARs in my web app's WebContent/WEB-INF/lib throw an exception.

I am using Project ROME for my RSS Feed which works fine on my local tomcat server I test on through Eclipse. However, when I run it on the Ubuntu Tomcat I get a ServletException:

javax.servlet.ServletException: Could not initialize class com.sun.syndication.feed.synd.SyndFeedImpl
    org.apache.catalina.security.SecurityUtil.execute(SecurityUtil.java:294)
    org.apache.catalina.security.SecurityUtil.doAsPrivilege(SecurityUtil.java:162)

I am guessing I need to change one of the files in /etc/tomcat6/policy.d/ which generates the /var/cache/tomcat6/catalina.policy file. But I don't know what to change. Help please!

  • Okay, I got something that works.

    I edited the /etc/tomcat6/policy.d/01system.policy and added at the bottom:

    // Grant WebApps All Permission
    grant codeBase "file:/var/lib/tomcat6/webapps/-" {
        permission java.security.AllPermission;
    };
    

    This works now, but I'm not sure if it's the right thing to do.
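
    If granting everything to every webapp feels too broad, a slightly narrower variant (the webapp directory name "myapp" here is hypothetical) scopes the same grant to just the one application:

    // grant AllPermission only to this one webapp instead of all of them
    grant codeBase "file:/var/lib/tomcat6/webapps/myapp/-" {
        permission java.security.AllPermission;
    };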

  • You could disable the security manager for Tomcat as whole.

    Edit /etc/default/tomcat6 and set TOMCAT6_SECURITY to no.

    TOMCAT6_SECURITY=no
    

    (Make sure the line is uncommented)

    Be sure you understand the implications of doing this though.

config TFS 2010 beta 2 on local network with ssl

In this case I can only use self-signed certificates, and I can only generate the certs with the computer name or IP. What do I need to do so that the local clients will recognize the cert?

  • AFAIK, unless you use a commercial cert or set up your own internal CA, you'll need to manually install the cert into the cert store on each client.

    From joeqwerty

Tracking down load average

The "load average" on a *nix machine is the "average length of the run queue", or in other words, the average number of processes that are doing something (or waiting to do something). While the concept is simple enough to understand, troubleshooting the problem can be less straight-forward.

Here are the statistics on a server I worked on today that made me wonder about the best way to fix this sort of thing:

  • 1GB RAM free, 0 swap space usage
  • CPU times around 20% user, 30% wait, 50% idle (according to top)
  • About 2 to 3 processes in either "R" or "D" state at a time (tested using ps | grep)
  • Server logs free of any error messages indicating hardware problems
  • Load average around 25.0 (for all 3 averages)
  • Server visibly unresponsive for users

I eventually "fixed" the problem by restarting MySQLd... which doesn't make a lot of sense, because according to mysql's "show processlist" command, the server was theoretically idle.

What other tools/metrics should I have used to help diagnose this issue and possibly determine what was causing the server load to run so high?

  • It sounds like your server is IO bound - hence the processes sat in D state.

    Use iostat to see what the load is on your disks.

    If MySQL is causing lots of disk seeks then consider putting your MySQL data on a completely separate physical disk. If it's still slow and it's part of a master-slave setup, put the replication logs onto a separate disk too.

    Note that a separate partition or logical disk isn't enough - head seek times are generally the limiting factor, not data transfer rates.
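
    For example (iostat ships in the sysstat package on most distributions):

    # extended per-device statistics, refreshed every 5 seconds
    iostat -x 5

    A high %util together with large await values on the device holding the MySQL data is a strong sign the disks are the bottleneck.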

    womble : Don't forget, also, that each waiting *thread* counts towards the load average, which ps doesn't show individually unless you ask it nicely.
    From Alnitak
  • You didn't run out of space, did you? You mention no hardware problems, lots of free ram, etc. Either no more free space (perhaps in /var?) or your mysql db is mounted on a remote drive and there are network issues.

    womble : Load average doesn't rise because you're out of disk space.
    lorenzog : you're right, it doesn't. But if a process is spin-looping waiting to write to disk, the CPU usage goes up..
    From lorenzog
  • In situations like this I like to have Munin, or similar, monitor the server in question. That way you get a history, presented in graph form, which might very well give good hints about the area in which the load originally started to manifest itself. Also, a default install of Munin comes with a good set of prepared tests.

    From andol
  • Having a load average of 25 with only 2-3 processes requesting CPU sounds a bit weird. A load of 25 means there are constantly 25 processes in your system which are in the running (R) or uninterruptible (D) state. As a comment above notes, threads that are not shown in ps aux are counted in the run queue like any other active process; you can see threads with ps axms. Exactly how they are counted in the load depends on the system used.
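
    As a quick, rough way to count the runnable and uninterruptible tasks with threads included (Linux procps; -L lists threads):

    # count tasks (threads included) currently in R or D state
    ps -eLo stat | grep -c '^[RD]'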

    But what is really important to know: the load has absolutely nothing to do with CPU utilization. If each of these processes only uses 1% CPU and then blocks, you still get an average load of 25.

    So my guess is that at the times your load pushes up to 25 you have too many processes that need I/O and don't get it. They block while waiting for read or write access, they all land in the run queue, and your load climbs that high.

    If you only have 2-3 processes active, watch out for threads. Your system can only reach a load average of 25 if processes and/or threads sum to 25 over a given period.

    If this is constant, you have a problem. If it only happens once or twice a day, watch out for I/O-expensive cron jobs and modify the times they are executed.

    Another possible cause is a script or program which starts 25 threads or processes at a given time, with those processes or threads blocking each other. I would guess your CPU utilization at that time is also very high and the system cannot satisfy all the requests being made at that moment.

    If you have a kernel > 2.6.20 I suggest iotop over vmstat. iotop shows you the current I/O of the system in a realtime, top-like view. Maybe this will help you.

    Another great tool to show CPU usage and processes is htop. It shows you the CPU utilisation of each CPU as a little graph, all three load averages, plus graphical bars of the memory and swap space currently in use.

    From evildead

Where could I find some network architecture courses or good books?

The best source I have found is not really a course; it is more like a blog with short stories about successful architecture strategies: highscalability.com.

My concern is whether there is any good course or book about IT architecture strategies that I can trust.

Best,

  • Take a course offered by a renowned network services vendor, such as Cisco. It does cost a lot, but they teach you a lot, and you can also get a certificate which is valued.

    For more - start doing it. Buy some switches, little routers, PCs, ... and put together three networks with several virtual machines, then impose some restrictions, traffic shaping and so on.

    If you're not afraid of some Unix, you can do it all in a virtual world on Solaris, using Zones (really simple virtual "hosts") and Crossbow (virtual networking). There are a lot of manuals and online material on these.

    In any case - enjoy!

    From slovon
  • If anecdotes are what you learn best from, I would suggest reading "Practice of System and Network Administration". While the subject matter goes beyond actual network architecture, there are two chapters specifically that might interest you: Chapter 6 "Datacenters" and Chapter 7 "Networks". However, I believe that useful information about architectural decisions can be found throughout the book.

    In fact, if you read the entire book you can consider yourself to have a degree in practical computer science. =)

SSH socks proxy tunnel without interactive session?

I use

ssh -D 1080 myhost.org

...to open up an SSH tunnel from my work machine to my home machine, so as to bypass the idiotic content filter on the corporate firewall. However this also creates an interactive SSH session that lives the whole time I'm using the tunnel. Is there any way to tell SSH just to create the tunnel and not bother with the interactive session?

  • Seems like you want the -N flag.

    ssh -D 1080 -N myhost.org
    
    o_O Tync : It's recommended to use -fnN in conjunction.
    From andol
  • from ssh manpage:

     -N      Do not execute a remote command. This is
             useful for just forwarding ports (protocol version 2 only).
    
    From dschulz
  • ssh -ND 1080 @myhomemachine.com

Why does copying a large file cause so much memory activity?

I'm currently copying six large files across a 100MBit/s network from my Windows XP system to a Windows 2003 server. My PC is rather unresponsive and when I look at perfmon, I see that PhysicalDisk and Processor is bouncing around 20% but memory (pages/sec) is at solid maximum.

What is causing this? I thought the memory counter generally indicated how much virtual memory paging was taking place.

  • How large are the files? Have a look at Slow Large File Copy Issues. This thread discusses it a bit more with some work-arounds and such. I've switched to using RichCopy when copying large files.

    Rob Nicholson : The files were huge video files - about 30GB. Will check out richcopy. Often used robocopy but always on look out for good alternatives.
    From Chris_K

Using WHM, is there a way to batch make and download backups?

I've got 10 accounts on one server. I want to make a full cpanel backup of each one and download them all to my external hard drive.

I do this manually, individually, which takes about an hour.

On a system using WHM (on a RedHat server), is there a way to batch make and download cpanel backups?

  • I think if you have shell access then you can write a shell script to back up all the accounts and put them in one safe directory, from where you can download all of them.
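
    Something along these lines might work as a starting point (it assumes root shell access; /scripts/pkgacct is the per-account backup script cPanel ships; account names and paths are examples):

    #!/bin/bash
    # package each account into /backup/cpanel, then pull the archives
    # down to the external drive with scp or rsync from your own machine
    mkdir -p /backup/cpanel
    for user in account1 account2 account3; do
        /scripts/pkgacct "$user" /backup/cpanel
    done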

    From Master

How do I set the global PATH environment in a batch file?

The group policy in our environment overwrites the PATH variable every time I log on and, as I run a 'non-standard' computer, it gets it completely wrong (C:\Windows vs C:\WINNT, missing directories etc). Currently, I manually change it every time I log on, but that is starting to get tiresome.

If I use the SET command to change the PATH variable in a batch file, it only has local scope so the change only applies to the commands in the batch file.

set PATH=C:\WINNT;C:\WINNT\System32
set PATH

This batch file will output the new path, but if I run set PATH on the command line afterwards, it will still be the original path.

How do I set the global PATH environment in a batch file? Or is there another technique I can use?

  • This is edited in System Properties -> Environment Variables. There you add directories to PATH.

    TallGuy : That's what I'm doing at the moment, every time I log on. I want to do it in a batch file so it can be done automatically.
    From o_O Tync
  • You can use the setx command:

    setx PATH C:\WINNT;C:\WINNT\System32 /m
    

    Setx is available in Windows 2003 and later, but can be downloaded in the Support Tools for Windows XP.

    TallGuy : Brilliant! Thank you.
    From Phil Ross
  • To set your path in the registry so that it propagates, you can create a PowerShell script that uses some variation of this:

    [System.Environment]::SetEnvironmentVariable("PATH", $Env:Path + ";newpart", "user")
    

    But when I tried it just now and then looked at System Properties > Environment Variables, it not only added my test path, but doubled the existing one. So that problem needs to be worked out.

    Based on this page.

ISA 2000 and COD MW2 Steam

OK, so maybe not the "proper use" of network resources, but we enjoy the odd COD game during lunch hours. When we played COD4, we had a dedicated server set up at the back of the server room. With MW2, we need to be able to connect to Steam to be able to play multi-player.

I've found this support article here:

https://support.steampowered.com/kb%5Farticle.php?ref=8571-GLVN-8711

Which outlines all the ports I need to open. I went through and created the following rules in ISA 2000 (I'm stuck with 2000 for now).

Protocol Definition: Steam
  • Primary connection: Port 27000, UDP, Send Receive
  • Secondary connection: Port range 27001-27030, Send Receive

Protocol Definition: Steam TCP In
  • Primary connection: Port 27014, TCP, Inbound
  • Secondary connection: Port range 27015-27050, Inbound

Protocol Definition: Steam 4380
  • Primary connection: Port 4380, UDP, Send Receive

When I start Steam on my local workstation (I did add an exception to the Vista firewall to allow Steam), the Steam client sits on "Updating Steam" for 5 minutes and then errors out with: You must connect to the internet first.

Any ideas? I assume I missed something.

Thanks for your help.

  • Why did you define TCP inbound connections? They should be outbound, if the internal client is going to talk to some Internet server using those ports.

    Also, after defining those protocols, did you actually create a policy to allow them from the internal network to the external one?

    twlichty : Sorry, typo. Yes, they are outbound. Yes, I also created a Protocol Rule to allow access from internal to external.
    From Massimo
  • Is your proxy server transparent or explicitly defined on the clients?

    Are your programs definitely routing through the proxy (can you see blocked traffic in ISA monitor?)

    It may be the case that your apps are not hitting the proxy and are trying to jump directly through the network gateway.

Do you skip a rack unit between servers?

It seems like there's a lot of disagreement in mindsets when it comes to installing rackmount servers. There have been threads discussing cable arms and other rackmount accessories, but I'm curious:

Do you leave an empty rack unit between your servers when you install them? Why or why not? Do you have any empirical evidence to support your ideas? Is anyone aware of a study which proves conclusively whether one is better or not?

  • I have them stacked in the one rack I have. Never had any problems with it, so I never had any reason to space them out. I would imagine the biggest reason people would space them out is heat.

  • I have never skipped rack units between rackmount devices in a cabinet. If a manufacturer instructed me to skip U's between devices I would, but I've never seen such a recommendation.

    I would expect that any device designed for rack mounting would exhaust its heat through either the front or rear panels. Some heat is going to be conducted through the rails and top and bottom of the chassis, but I would expect that to be very small in comparison to the radiation from the front and rear.

    Doug Luxem : In fact, if you are skipping rack units, you need to use covers between each server, otherwise you will get air mixing between your hot and cold aisles.
  • I usually leave a blank RU after around 5RU of servers (i.e. 5x1RU, or 1x2RU + 1x3RU), and whether that makes sense depends on the cooling setup in the data centre you're in. If cooling is delivered in front of the rack (i.e. through a grate in front of the rack), the idea is that cool air is pushed up from the floor and your servers suck it through them; in that circumstance you would typically get better cooling by not leaving blank slots (i.e. use a blank RU cover). But if cooling comes up through the floor panel inside your rack, in my experience you get more efficient cooling by breaking the servers up rather than piling them on top of each other for the entire rack.

    From Brendan
  • Every third, but that's due to management arms and the need to work around them rather than heat. The fact that those servers each have 6 Cat5 cables going to them doesn't help. We do make heavy use of blanking panels, and air-dams on top of the racks to prevent recirculation from the hot-aisle.

    Also, one thing we have no lack of in our data-center is space. It was designed for expansion back when 7-10U servers were standard. Now that we've gone with rack-dense ESX clusters it is a ghost town in there.

    Matt Simmons : OK, 6 cables, let's see...2 management interfaces, 2 iscsi interfaces and 2....dedicated to the cluster manager?
    Doug Luxem : Don't use management arms and you don't have to skip units. :)
    sysadmin1138 : 6 cables: 1x HP iLO card, 1x ESX mgmt LUN, 4x VM Luns. Also, fibers for the SAN. We haven't gone iSCSI yet. If we were willing to fully undress the servers before pulling them out, we'd definitely go w/o the arms.
    Laura Thomas : Having seen your data center I have to wonder if you set things up in a traditional hot aisle cold aisle setup rather than scattered around your giant room if you'd get better thermal performance.
  • If your servers use front to back flow-through cooling, as most rack mounted servers do, leaving gaps can actually hurt cooling. You don't want the cold air to have any way to get to the hot aisle except through the server itself. If you need to leave gaps (for power concerns, floor weight issues, etc) you should use blanking panels so air can't pass between the servers.

    Thomas : Yes, if you leave a gap, you need to fill it with a panel to prevent that.
    joeqwerty : +1. Rack mount servers and racks are designed for air flow with all panels and bezels on and all U's filled, much like the air flow in a PC is designed for having all the covers on. Circumventing the design by leaving gaps and/or removing panels and covers is likely to do more harm than good.
    From jj33
  • I don't skip Us. We rent and Us cost money.

    There's no reason to for heat these days. All the cool air comes in the front and goes out the back; there are no vent holes in the tops any more.

    From mrdenny
  • We have 3 1/2 racks worth of cluster nodes and their storage in a colocation facility. The only places we've skipped U's is where we need to route network cabling to the central rack where the core cluster switch is located. We can afford to do so space wise since the racks are already maxed out in terms of power, so it wouldn't be possible to cram more nodes in to them :)

    These machines run 24/7 at 100% CPU, and some of them have up to 16 cores in a 1U box (4x quad core Xeons) and I've yet to see any negative effects of not leaving spaces between most of them.

    So long as your equipment has a well designed air path I don't see why it would matter.

  • Don't leave space if you have cool air coming from the floor and also use blanks in unused u space. If you just have a low-tech cooling system using a standard a/c unit it is best to leave gaps to minimize hot spots when you have hot servers clumped together.

    pauska : If your servers use front-to-back fan cooling its not wise at all to leave gaps, it will hurt the airflow.
    From Asa Gage
  • In our data center we do not leave gaps. We have cool air coming up from the floor and gaps cause airflow problems. If we do have a gap for some reason we cover it with a blank plate. Adding blank plates immediately made the tops of our cold aisles colder and our hot aisles hotter.

    I don't think I have the data or graphs anymore but the difference was very clear as soon as we started making changes. Servers at the tops of the racks stopped overheating. We stopped cooking power supplies (which we were doing at a rate of about 1/week). I know the changes were started after our data center manager came back from a Sun green data center expo, where he sat in some seminars about cooling and the like. Prior to this we had been using gaps and partially filled racks and perforated tiles in the floor in front and behind the racks.

    Even with the management arms in place eliminating gaps has worked out better. All our server internal temperatures everywhere in the room are now well within spec. This was not the case before we standardized our cable management and eliminated the gaps, and corrected our floor tile placement. We'd like to do more to direct the hot air back to the CRAC units, but we can't get funding yet.

  • No gaps, except where we've taken a server or something else out and not bothered to re-arrange. I think we're a bit smaller than many people here, with 2 racks that only have about 15 servers plus a few tape drives and switches and UPSes.

    From Ward
  • No gaps other than when planning for expanding SAN systems or things like that. We prefer to put new cabinets close to the actual controllers.

    If you have proper cooling, leaving gaps will not be beneficial unless the server is poorly constructed.

    From chankster
  • Leaving gaps between servers can affect cooling. Many data centres operate suites on a 'hot aisle' 'cold aisle' basis.

    If you leave gaps between servers then you can affect efficient airflow and cooling.

    This article may be of interest:

    Alternating Cold and Hot Aisles Provides More Reliable Cooling for Server Farms

    From Kev
  • I get the impression (perhaps wrongly) that it is a more popular practice in some telecoms environments where hot/cold aisles aren't so widely used.

    It's not suited to a high density and well run datacentre though.

    From Dan Carley
  • I have large gaps above my UPS (for installing a second battery in the future) and above my tape library (in case I need another one). Other than that I don't have gaps, and I use panels to fill up empty spaces to preserve airflow.

    From pauska
  • Google does not leave a U between servers, and I guess they are concerned about heat management. It is always interesting to watch how the big players do the job. Here is a video of one of their datacenters: http://www.youtube.com/watch?v=zRwPSFpLX8I&feature=player_embedded

    Go directly to 4:21 to see their servers.

  • In a situation where we had our own datacenter and space wasn't a problem, I used to skip a U (with a spacer to block airflow) between logical areas: web servers had one section; database, domain controller, e-mail, and file server had another; and firewalls and routers had another. Switches and patch panels for outlying desktops were in their own rack.

    I can remember exactly one occasion where I skipped a U for cooling reasons. This was an A/V cable TV looping solution in a high school, where there were three units that were each responsible for serving the cable TV system to one section of the building. After the top unit had to be replaced for the third time in two years due to overheating, I performed some "surgery" on the rack to make mounting holes so I could leave 1/2U of space between each of the three units (for a total of 1 U total space).

    This did solve the problem. Needless to say this was thoroughly documented, and for extra good measure I taped a sheet to the top of one of them, in the gap, explaining why things were the way they were.

    There are two lessons here:

    1. Leaving a gap for cooling is only done in exceptional circumstances.
    2. Use a reputable case or server vendor. Be careful of buying equipment that tries to pack 2U worth of heat into 1U worth of space. This will be tempting, because the 1U system may appear to be much cheaper. And be careful of buying an off-brand case that hasn't adequately accounted for air-flow.
    From Joel Coel
  • I wouldn't leave gaps between servers, but I will for things like LAN switches - this allows me to put some 1U cable management bars above and below... but it's definitely not done for cooling.