Wednesday, January 19, 2011

Admin authentication on a Windows XP machine on a network

Hello,

This is something I'm curious about. There is a LAN consisting of Windows XP machines, and an administrator account, say admin-xyz, which can be used to log in to any of the machines on the network. But when I run pwdump to get the password hashes on a machine, I don't see the admin-xyz account. I'm just curious how the authentication happens. Can someone please shed some light on this?

Thank You.

  • Most likely it's only dumping the LOCAL hashes from the SAM, and you are looking for the domain admin account which is not stored locally.
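
    One way to see the distinction yourself, sketched with standard Windows commands (admin-xyz as in the question):

```
rem Accounts stored in this machine's local SAM (what pwdump dumps):
net user

rem Ask the domain controller about the account instead:
net user admin-xyz /domain
```

    If the account only shows up with /domain, its password hash lives on the domain controllers (and possibly in the machine's cached domain credentials), not in the local SAM.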

    From Dan

Managing DNS when I have only one IP on my VPS

Hello there. I'm using a VPS, which has one IP address, and I want to use it for all my domains. But domain registrars require a minimum of two NS servers. How should I set things up so that all my domains are associated with my VPS server? Thanks.

  • Your VPS service ought to offer secondary DNS for you, perhaps free or for a small fee. Or maybe your domain registrar can do it. Have you looked into either of those options?

  • Thanks. I have a few domains from a registrar that provides no DNS service. What would you recommend in this situation? Renting a second IP? And then what should I do?

  • afraid.org offers free DNS services. They will also act as a secondary for you :)

  • There are many companies that offer DNS hosting: zoneedit has an (initially) free service that may suit your needs, but they'll charge if you have >5 domains, or if any of your domains have high traffic. dyndns.com also offer a commercial zone hosting service.

    (I've not used either of these, but I've heard good things about zoneedit from friends)

  • For the company domain names I'm hosting DNS internally and on my home network (for now), but I'm also using EveryDNS as a secondary. It's a free service. When I registered my personal domain I only had a single DNS server available, so I used that IP address for both server entries (but with different names) and created the glue records accordingly. That allowed me to satisfy the two-server requirement with a single server. Once I had another DNS server to use, I simply updated the records.
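
    The single-server trick described above looks roughly like this in a zone file (hypothetical names and a documentation IP):

```
; Two NS names that both resolve to the same machine
example.com.      IN NS  ns1.example.com.
example.com.      IN NS  ns2.example.com.
ns1.example.com.  IN A   203.0.113.10  ; glue record
ns2.example.com.  IN A   203.0.113.10  ; same box, second name
```

    This satisfies a registrar's two-nameserver requirement, though it obviously provides no real redundancy.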

Send Incoming Email to Multiple Mailboxes

Hey Everybody,

I have a bit of an interesting email situation I'm trying to set up for a small business. It's probably far from normal, but I'm hoping someone can send me down the right path.

Suppose I have two users, joe and john with emails joe@domain and john@domain. Both of these users share many of the same responsibilities, and trust between the two users is not an issue, so each of them wants to see the other's emails and wants the other to see his. However, each wants to send from their own personal email address.

So basically, emails to joe@domain should be sent to joe's mailbox and john's mailbox. Emails to john@domain should be sent to joe's mailbox and john's mailbox. And emails sent by either user (in outlook, this really isn't the hard part) should be sent from their own personal accounts.

Sharing a single mailbox isn't an option either, because both users have different methods of organizing and tracking their emails, so I do want two separate, distinct mailboxes that each contain ALL of the emails.

I'm hoping someone can help me out. I'm rather new to the Exchange thing, so it's possible that I'm missing something simple. In case it matters, we are running SBS 2008, but I'm fully comfortable in Exchange Management Console/Shell.

  • You could use the forwarding option in exchange.

    See this guide and start from step 9 http://www.msexchange.org/tutorials/MF015.html
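
    In the Exchange Management Shell, the forwarding that guide describes can be set up something like this (a sketch using the question's joe/john placeholder names; worth testing that the mutual forward doesn't loop before relying on it):

```powershell
# Deliver to joe's mailbox AND forward a copy to john
Set-Mailbox -Identity joe -DeliverToMailboxAndForward $true -ForwardingAddress john@domain

# The mirror image for john
Set-Mailbox -Identity john -DeliverToMailboxAndForward $true -ForwardingAddress joe@domain
```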

    LorenVS : Sounds like exactly what I need, just going to make sure I can find it in the server management console...
    LorenVS : I managed to find it; you gave me the right option, but it's buried in a slightly different location in the 2008 world (it's on mailbox properties in Exchange Management Console). Thanks though... Sorry for not being able to upvote, I'm usually on Stack Overflow, so I have no rep here :(
    From Edward J
  • How about a distribution list? We use a lot of distribution lists at work, and you can add/remove people from them with relative ease.

    LorenVS : I was initially looking at using distribution lists, the problem was that the email addresses I was using were the personal email addresses for the account. I didn't want joe@domain to be a distribution list because then I would have to change his personal email address to something else... Thanks though
    From Glen Y.

Alternative routers to the Cisco SA 500

We are evaluating the Cisco SA 500 router for our new office router. Would anyone recommend another similarly-featured router from another manufacturer?

Requirements:

  • Office of 14 people
  • We are likely to switch to 14 VOIP phones (Linksys SPA-942) soon
  • We want to use VPN on the router, if possible, with Windows and Mac users

Accepting a connection on a socket on Windows 7 takes more than a second

Here's what I've done:

  • I wrote a minimal web server (using Qt, but I don't think it's relevant here).
  • I'm running it on a legal Windows 7 32-bit.

The problem:

  • If I make a request with Firefox, IE, Chrome or Safari, it takes about one second before my server sees that there is a new connection to be accepted.

Clues:

  • With other clients (wget, my own test client that just opens a socket), seeing the new connection is a matter of milliseconds.
  • I installed Apache and tried the clients mentioned above. Serving the request takes ~50ms as expected.
  • The problem isn't reproducible when running Windows XP (or compiling and running the same code under Linux)
  • The problem seems to present itself only when connecting to localhost. A friend connected over the Internet and serving the connection was a matter of milliseconds.
  • Running the server in different ports has no effect on the 1 second latency

Here's what I've tried without luck:

  • Stopped the Windows Defender service
  • Stopped the Windows Firewall service

Any ideas? Is this some clever 'security feature' in Windows 7? Why isn't Apache affected? Why are only the browsers affected?

  • Rather than connecting to localhost, what happens if you connect to your real (but local) IP address?

    eg http://192.168.1.1/ ?
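
    One way to test that suggestion from the server side is to time the TCP connect against the hostname and against the literal address (a quick sketch; port 8080 is just an example, use whatever your server listens on):

```python
import socket
import time

def time_connect(host, port):
    """Return the seconds taken to open (and close) a TCP connection."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return time.perf_counter() - start

# print(time_connect("localhost", 8080))   # includes name resolution
# print(time_connect("127.0.0.1", 8080))   # skips name resolution entirely
```

    If the literal address is fast and "localhost" is slow, the delay is in name resolution rather than in your server.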

    eburger : Yep that was it. Apparently the reason is this: http://serverfault.com/questions/4689/windows-7-localhost-name-resolution-is-handled-within-dns-itself-why
  • I'm with MidnighToker - is it only when you hit "localhost", or is the delay there via the local IP address as well? If it's only when you aim for 'localhost', check your hosts file for odd entries. Windows Vista and 7 also include a hosts entry for IPv6 ("::1 localhost").

    I've personally seen unused IPv6 connectivity slow down web browsing in Firefox on Ubuntu when IPv6 is enabled but not used, and I've read that others have seen similar behavior in Windows, so that may be something to consider (I haven't personally encountered the problem in either Vista or Win7).

    Under Ubuntu, Firefox has its own 'don't use IPv6' setting that fixes the delay without making system-wide network changes, so perhaps Firefox for Windows also has it (dunno, I use IE in Windows):

    Here's how to disable it in Firefox, to perhaps at least start testing the theory:

    1. Type about:config in the address bar and press Enter.

    2. Find network.dns.disableIPv6 in the list.

    3. Right-click -> Toggle (you want it set to 'true' to disable).

    4. Restart your Mozilla application and try again.

    techie007 : Looks like you found this possibility on your own while I was typing. Hope it works! :)
    From techie007

Simple DNS solution for VM-based server

I run an Ubuntu server in a VM with VirtualBox on my Windows host machine. The Ubuntu VM hosts the webserver and is used for PHP development. However, I'm trying to figure out the easiest way to set up DNS. I used to use the Windows hosts file to point domains at localhost when I ran the webserver on the same OS. However, the VM is running in bridged mode and so can get a different IP address depending on my current network. I could set up the VM to use a static IP address, but I was wondering if there are any other solutions or ideas on how best to tackle this. One idea I had was to set up a BIND DNS server on the VM, but that requires hardcoding the forwarding DNS server, not an option in my case.

  • Assuming that you are using DHCP to obtain the IP address for the VM, and depending on how the DHCP server is set up, you can send hostname information to your DHCP server. If the DHCP server has been set up to update the DNS entries, the machine can then be readily identified by this hostname.

    There is a send host-name "<hostname>"; option in /etc/dhcp3/dhclient.conf.
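
    A minimal sketch of that option (the hostname is a placeholder):

```
# /etc/dhcp3/dhclient.conf
send host-name "webdev-vm";
```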

    From sybreon
  • Use a static IP address on the server, or set up your DHCP server (router) to supply a fixed IP address, and then use the hosts file. It's by far the simplest way.

Advice on setting up remote virtual server?

I often have contractors do testing work for me that involves them running 2 or 3 virtual machines (running, say, Server 2008). So currently this means that the contractors have to have machines powerful enough to run VMware Server, with a couple of virtual machines running simultaneously.

What I'd like to do is have a machine in my home office that is accessible from the Internet, and which runs VMware Server (or something similar). This way, contractors don't need to have powerful machines and can dial into my machine and use the VMs. If they get stuck with something, I'll have local access to the server and can help diagnose the problem.

I'm looking for suggestions as to what equipment to use and how to set this up so the performance is reasonable for the remote contractor. For example, can I use a somewhat powerful desktop machine (say, a Dell XPS quad-core) running VMware Server? And then have contractors connect via LogMeIn? Or would I need a server-class machine? (Any suggestions for a reasonable price?) Would Remote Desktop be better, or maybe VNC?

I would like to keep costs down as much as possible; this is just going to be running on my home network. But if there is a compelling reason to buy more powerful gear, I'd do it.

As an alternative, are there any companies that provide virtual machines that you can lease on an hourly basis? So that you could do work, save your state, and come back to it the next day? I looked into Amazon's EC2 and that seems too expensive/complicated for what I need.

Thanks in advance for any suggestions!

  • (Aside: I don't have any affiliation with Dell-- I just like their products.)

    I just ordered a Dell PowerEdge T310 server to run VMware ESXi and host VMs for test software deployments. This machine, with 8GB of RAM, a fixed power supply, a 3-year on-site warranty, and a cabled hard drive configuration, set me back about $800.00. (It hasn't arrived yet, so I can't tell you anything about how it actually runs. The specs looked great for the price.)

    I looked at the Precision Workstation line but couldn't find a machine as inexpensive as the T310. I also looked at building a white-box machine, but found that hitting the price point of the T310 was also difficult.

    I didn't buy any storage (other than the stock 160GB drive that came in the unit) with the T310. Once it arrives I'm planning to throw in a couple of 1TB SATA drives I've got lying around. I don't care about RAID for my purposes because I'm only using the machine for testing. If I were going to use it for anything serious I'd purchase a nice PERC SAS RAID controller and at least a couple of SAS disks, or use an iSCSI SAN. (The "S100" and "S300" "RAID controllers" are just software RAID, BTW...)

    The machine can use up to 32GB of RAM. I figure that I'll add more RAM to it later, but it'll do what I need right now.

    It sure looks like a nice little box. I hope it turns out to be.

    johnnyb10 : Thanks for the suggestion!
  • Well, the answer is "it depends".

    If you have a box capable of running an appropriate number of VMs (the T310 computers do look nice; I've had passing experience with them -- the only concern I have is the large, slow SATA drives), and you have an Internet connection good enough that the contractors can do what they need to do, then yes, you can host the VMs at your home.

    Note that most home internet connections might have very good download speeds, but will have comparatively poorer upload speeds. This is important if your contractors are trying to go interactive with their VMs. The more data they will be trying to pass back to where they actually are (database connections, large remote desktop screens with highly interactive/variable applications open), and the more of them which are going to be active at the same time, the poorer their experience will be.

    jgardner04 : David is right on the money about the internet connection. That will probably be your choke point in your configuration.
  • To answer your leasing question, VMBed offers machines for rent on an hourly basis. We have multiple wide internet connections so the remote desktop performs well.

    From VMBed

What do you do in IIS if you get an error when trying to start your default web site and it says that it cannot be started because another web site may be using the same port?

For some reason I have IIS 6 and IIS 7 on the same machine. I seem to be unable to start IIS 7, but it seems I can in IIS 6. Yet I am able to open c:\inetpub\wwwroot\iisstart.htm directly, but I am not able to get a working http://localhost.

  • You need to stop one of them, or set the other to listen to a different port.

    How to set the port for IIS Services
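
    On IIS 7 the port can also be changed from the command line with appcmd (a sketch; the site name and port are examples):

```
%windir%\system32\inetsrv\appcmd set site /site.name:"Default Web Site" /bindings:"http/*:8080:"
```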

    From Pekka
  • It sounds like your setup is borked. Then again, Skype, among other programs, grabs port 80 by default (or it certainly used to).

    Try netstat -abn for more info about what is using the port.

    From spender
  • Pekka has one solution, but another solution to this problem, which I think is more common, is to use HTTP host headers. Host headers allow multiple sites to be hosted on one port (commonly 80 for HTTP) using one public IP address. If you are interested in how they work, you should search Google for HTTP header design and IIS host headers. I've provided links below to get you started in the short term.

    Here is a link that shows how to set them up in IIS 6.

    http://www.visualwin.com/host-header/

    And another

    http://support.microsoft.com/kb/190008

    Cheers,

    Mike

  • Trying netstat -abn produced a lot of output. What exactly am I looking for, and what am I supposed to do once I find it?

    samsmith : Look for ":80" in the netstat output. Best is to pipe netstat to a text file, then search the text file. You are looking for the app that has port 80 hooked.
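
    If you just want a quick yes/no on whether a port is already taken (netstat -abn is still what tells you *which* process has it), a small sketch:

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((host, port)) == 0

# port_in_use(80) -> True when IIS, Skype, etc. already hold port 80
```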

Set 802.1Q tagged port on VLAN1 on Dell PowerConnect switch

I'm having big troubles when adding this Dell switch to my network.

Here we use several VLANs to segment traffic. All switches (3com and DLink, mostly) have the same VLANs configured; most ports are 'untagged' and belong to a single VLAN, except for the ports used to join the switches together (in a star topology); these ports belong to all VLANs and use 802.1Q tags. So far, it works really well.

But on this new switch (a Dell PowerConnect 5448), the settings are very different (and confusing). I have configured the same VLANs, and the uplink ports are set in 'general' mode (supposedly fully 802.1Q compliant). I can set the VLAN membership to 'T' on these ports for all VLANs except VLAN 1; it always stays 'U' on VLAN 1.

Any ideas?

  • Ugh, I hate those switches (well, the whole PowerConnect line, really). I'm thinking back a bit, because we pulled all but a few of those switches out of our infrastructure and don't use VLAN 1 (using VLAN 1 is really bad practice, btw).

    So this is what I remember about those switches, and seems to be confirmed by the current config on one of my remaining 5224's:

    The Dell switches need a "native vlan" even on trunked ports. This VLAN must be untagged. I left it at VLAN 1, as we only use it as a parking VLAN for shutdown ports on our switches. Our config looks like this (sorry, the syntax might be slightly different on the newer models, but this should get you started):

    interface ethernet 1/7
     description XOVER to <core_switch> 3/3
     no negotiation
     switchport allowed vlan add 1 untagged
     switchport native vlan 1
     switchport mode trunk
     switchport allowed vlan add 1,3-10 tagged
    

    What you would want to do is pick a VLAN you don't use and assign it as the native VLAN, so that your config looks something like:

    interface ethernet 1/7
     description XOVER to CORE 3/3
     no negotiation
     switchport allowed vlan add 666 untagged
     switchport native vlan 666
     switchport mode trunk
     switchport allowed vlan add 3-10, 666 tagged
    

    You'll notice the native VLAN is both tagged and untagged - I'm pretty sure the "tagged" part of the configuration is simply ignored for the native VLAN.

    I hope that at least gets you going in the right direction.

    Evan Anderson : +1 - Except that I don't hate the PowerConnect switches, that all jibes with my experience with them. They have a quirky configuration (and quirky layer 3 functionality), but in general they've done what I want (and are *VASTLY* cheaper than Cisco's gold-plated comparable offerings).
    Javier : great, it seems the 'trunk' mode is what i really want. i managed to change the native vlan, but still doesn't let me add VLAN1 as untagged. it complains that "vlan not created by user"
    Javier : sorry, of course i meant "still doesn't let me add VLAN1 as tagged"
    David Mackintosh : +1 content, also no hate for PowerConnect.
    From Zypher
  • Did you find any solution to this issue? I have had the same problem for a week, and I'm thinking of changing all my VLAN configuration (and 3com switch configuration) because of it. I'm looking for another, easier solution (or should I just burn my PowerConnect switches?).

    From Txus

Does using a single cable to connect two switches create a bottleneck?

I realise this may be a stupid question for some, but it's something I've always wondered about.

Let's say we have two gigabit switches and all of the devices on the network are also gigabit.

If 10 computers connected to switch A need to transfer large amounts of data to a server on Switch B (at the same time), is the maximum transfer speed of each connection limited by the bandwidth of the connection between the two switches?

In other words, would each computer only be able to transfer at a speed of one gigabit divided by the 10 machines trying to use the "bridge" between switches?

If so, are there any workarounds so that every device can use its maximum speed from point to point?

  • This is a possible bottleneck. Some switches will allow you to aggregate bandwidth across multiple ports, e.g. 3x 1Gbps or 4x 1Gbps. The switch OS will have a method for doing this, and it varies from switch to switch, as each vendor has its own way of doing it (and sometimes a different name for the feature as well). Check the manual for your make and model to see if this is supported.

    From Dave M
  • The answer is yes.

    Possible workarounds include using multiple gigabit links between the switches or a faster link between the switches. Both options require support from the switches, and when aggregating multiple links it can be problematic to get the load divided evenly between them.

    From af
  • If you use one of the 1Gb/s ports to link the two switches then yes, the total bandwidth available will be that single 1Gb/s shared across the 10 machines; after overhead, your combined throughput will be around 0.8Gb/s in total.

    If your switches support it, you can use a stacking module. This usually allows a much higher throughput rate, at almost the speed of the switch backplane.

    If your switch supports it you can also use link aggregation.

    There is, however, another issue here as well: if your server is connected on a 1Gb port, it does not matter how you interconnect the switches, as your server will only be able to send/receive data at 1Gb/s.

    Your best option would be to use a stacking module for your switches and put your server on a 10Gb link. This also assumes that your server will be able to handle that amount of data. Typical server RAID setups will only support sustained throughputs of around 700Mb/s over an extended period of time.

  • If you are using managed switches (ones you can log into in some way) then perhaps you can combine multiple switch ports to get more bandwidth.

    Many off the shelf gigabit switches will have no restrictions between ports on the same switch. That is, if you have 10 switch ports, all of them can be in use at full speed without any problems.

    If you use one of those ports to connect to another switch, then yes, communication between those two switches is slowed down. However, the computers which share a single switch won't slow down, only when the traffic crosses that single inter-switch cable will people begin to fight for bandwidth.

    If you find that too limiting, you will have to use a managed switch on both ends, and aggregate switch ports together to get 2, 3, 4, whatever speed you need. Or, buy a very high end switch and use 10-gig between the switches. Chances are combining many 1 gig ports together will be cheaper.

  • If, and only IF, both switches support a LAG/trunk connection of multiple ports to create a single wider connection, you can then connect from 2 up to the maximum allowed number of ports to create link aggregation.

    Warning: you don't just connect the cables and you're set to go! You need to configure the ports on both sides and only then connect them; otherwise you risk a broadcast storm that can bring down both of your switches.

    From thedp
  • short answer: yes, it can be a bottleneck

    slightly better answer: try port trunking to add more links between switches.

    more personal answer: ...it's quite likely that you won't need it. It depends a lot on the kind of work done by your users, but it's very seldom that you have many users pushing data around 100% of the time. More likely, each link will be idle about 95% of the time, which would mean a link shared by 10 users would be idle about 60% of the time (0.95^10 ≈ 0.60), and would have two or more users actively competing for it only about 9% of the time.

    joeqwerty : +1. Good answer. In theory: Yes it could be a bottleneck. Reality: It probably isn't and probably won't become a bottleneck. Before rushing to make changes, set up link aggregation, etc., etc. You should monitor and measure the utilization of the link between the 2 switches.
    Evan Anderson : I take a little bit of issue with the phrase "it can be a bottleneck." It *is* a bottleneck. Whether or not it's creating a problem is an orthogonal concern. On any modern gigabit Ethernet switch the fabric exceeds 1Gbps, so by definition cascading gigabit switches with crossover cables creates bottlenecks.
    Javier : @Evan Anderson: yes, I see your point... but is it the worst bottleneck? And can it be called a bottleneck when it's still much wider than what you push through it?
    joeqwerty : @Evan: I see your point. Is it a bottleneck? Yes. Is it creating performance problems? That can only be determined through monitoring and measurement.
    From Javier
  • Yes. Using single cables to "cascade" multiple Ethernet switches together does create bottlenecks. Whether or not those bottlenecks are actually causing poor performance, however, can only be determined by monitoring the traffic on those links. (You really should be monitoring your per-port traffic statistics. This is yet one more reason why that's a good idea.)

    An Ethernet switch has a limited, but typically very large, internal bandwidth to perform its work within. This is referred to as the switching fabric bandwidth and can be quite large, today, on even very low-end gigabit Ethernet switches (a Dell PowerConnect 6248, for example, has a 184 Gbps switching fabric). Keeping traffic flowing between ports on the same switch typically means (with modern 24 and 48 port Ethernet switches) that the switch itself will not "block" frames flowing at full wire speed between connected devices.

    Invariably, though, you'll need more ports than a single switch can provide.

    When you cascade (or, as some would say, "heap") switches with crossover cables you're not extending the switching fabric from the switches into each other. You're certainly connecting the switches, and traffic will flow, but only at the bandwidth provided by the ports connecting the switches. If there's more traffic that needs to flow from one switch to another than the single connection cable can support frames will be dropped.

    Stacking connectors are typically used to provide higher-speed switch-to-switch interconnects. In this way you can connect multiple switches with a much less restrictive switch-to-switch bandwidth limitation. (Using the Dell PowerConnect 6200 series again as an example, their stack connections are limited in length to under 0.5 meters, but operate at 40Gbps.) This still doesn't extend the switching fabric, but it typically offers vastly improved performance as compared to a single cascaded connection between switches.

    There were some switches (Intel 500 Series 10/100 switches come to mind) that actually extended the switching fabric between switches via stack connectors, but I don't know of any that have such a capability today.

    One option that other posters have mentioned is using link aggregation mechanisms to "bond" multiple ports together. This uses more ports on each switch, but can increase switch-to-switch bandwidth. Beware that different link aggregation protocols use different algorithms to "balance" traffic across the links in the aggregation group, and you need to monitor the traffic counters on the individual interfaces in the aggregation group to ensure that balancing is really occurring. (Typically some kind of hash of the source / destination addresses is used to achieve a "balancing" effect. This is done so that Ethernet frames arrive in order, since frames between a single source and destination will always move across the same interface, and it has the added benefit of not requiring queuing or monitoring of traffic flows on the aggregation group member ports.)

    All of this concern about port-to-port switching bandwidth is one argument for using chassis-based switches. All the linecards in, for example, a Cisco Catalyst 6513 switch, share the same switching fabric (though some line cards may, themselves, have an independent fabric). You can jam a lot of ports into that chassis and get more port-to-port bandwidth than you could in a cascaded or even stacked discrete switch configuration.
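
    The address-hash balancing described above can be sketched in a few lines (a toy model, not any particular vendor's algorithm):

```python
import zlib

def choose_link(src_mac, dst_mac, n_links):
    """Pick an aggregation-group member link for a frame.

    Hashing the source/destination pair means every frame between the
    same two hosts uses the same link (so frames stay in order), but it
    also means one busy host pair can never be spread across links --
    which is why you monitor the per-interface counters.
    """
    key = (src_mac + dst_mac).encode()
    return zlib.crc32(key) % n_links
```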

Virtual Box and Multiple IPs

I've got several VMs set up here, all running linux (mostly centos but 1 ubuntu). I connect to them over SSH, but it's annoying having to go back to the VM server to check on IP addresses. Also, on reboot they usually get a different IP due to the network setup here, which I have no control over.

What's the easiest way for me to track all the IPs that my VMs are using?

  • Static IPs and a text editor

    Jonathan Haddad : I don't have the option to use static IPs, as I mentioned above.
    Chopper3 : Ah, sorry, kind of misread/misunderstood that.
    From Chopper3
  • Set up a cron job that makes a wget request to a .php file that records the hostname and IP of the machine, and you're done.

    David Mackintosh : I would only get it at boot time, but yeah, good idea.
    From adam
  • A lot of your options depend on your exact network configuration. Most of the easy options involve some network modification, which you indicate is not possible in your case. Here are some options given that constraint:

    • If you use a Microsoft DHCP server integrated with Active Directory, then you may be able to register the hostname with Active Directory and inherit a DNS name. Check out http://www.centos.org/docs/5/html/Deployment%5FGuide-en-US/s1-dhcp-configuring-client.html - I believe that DHCP_HOSTNAME needs to be set. I've never actually done this, so there may be additional steps to make this work in a Microsoft context (perhaps having all the machines join the domain, etc).
    • If you're looking for a pure Linux solution, you may want to look at the avahi daemon. It uses multicast DNS to publish hostnames and services on a local network. Assuming avahi is running on all your boxes, you should be able to edit /etc/nsswitch.conf, verify that mdns4 or mdns4_minimal is included on the hosts line, and then resolve the hostname of each of your servers directly. Again, I've not set this up in a production environment, so there might be other configuration required.

    Both of these are pretty messy, so you're much better off working with whoever manages your network to get static IPs and DNS names, as Chopper3 mentions. You might even be able to get away with static DHCP assignments and then modify the hosts files, but again, that requires the cooperation of your network team.
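
    For reference, the avahi option above comes down to a hosts line something like this (exact keywords vary by distribution):

```
# /etc/nsswitch.conf
hosts: files mdns4_minimal [NOTFOUND=return] dns
```

    With that in place you can typically reach each box as hostname.local.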

    womble : +1 to everything said here. Absent any possibility of cooperation from your netops team, I'd say that mDNS is probably going to be your least-worst option.
    Jonathan Haddad : I wish I could mark more than 1 item as accepted answer - this is a great solution... both this and the NAT solution work.
    From SteveM
  • Is it a requirement that you use bridged mode, versus NAT mode? Because of course if you have NAT enabled, then you could forward various ports for your SSH access, and the IP (and of course the port) would remain constant through reboots.

    You can find details on this in the VirtualBox 3.1.2 user manual, section #6.3.1 "Configuring port forwarding with NAT".

    Another thought: I cannot help but notice that my IP address shows up in my virtual machine's XML file:

        <GuestProperty
            name="/VirtualBox/GuestInfo/Net/0/V4/IP"
            value="10.0.2.15" timestamp="1261587087753023000"
            flags=""/>
    

    (On linux, you can find this in ~/.VirtualBox/Machines/MyVMName/MyVMName.xml)

    So especially if you are using VBoxManage to start/stop your VMs, you could grep the machine's XML and update "something" -- like a web page, or a text file? -- so you'd always know the IP address of each VM. I'm guessing this will work regardless of whether you're using NAT or bridged networking, though I haven't personally tried it... I have a static IP in my VM. I don't know who/when/where this value gets updated in the XML file. Presumably VBox updates it when it detects a DHCP reply it needs to redirect to the VM.
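
    If you'd rather not grep by hand, here's a small sketch that pulls that guest property out of a machine XML file (it assumes the GuestProperty element shown above is present):

```python
import xml.etree.ElementTree as ET

def guest_ip(xml_path):
    """Return the guest's IPv4 address recorded in a VirtualBox machine XML."""
    tree = ET.parse(xml_path)
    for node in tree.iter():
        # the tag may carry an XML namespace prefix, so match on the suffix
        if node.tag.endswith("GuestProperty") and \
                node.get("name") == "/VirtualBox/GuestInfo/Net/0/V4/IP":
            return node.get("value")
    return None
```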

    From Stéphane
  • When the machines get an IP address via DHCP on the network, do they not get DNS entries as well? E.g. you should be able to connect to them via their hostnames, which would stay static.

    You could run all the virtual machines within their own "Virtual Network" and run your own DHCP/DNS/router which connects to the real network, but you'd still need your network admin to add route/dns forwarding which he might not like.

    The other way of doing it, which would leave you in some control, would be to cheat and use one of those external DNS services, e.g.:

    dyndns.org is a free service that lets you have a static hostname mapped to a dynamic IP address. They also have a tonne of clients for various OSes to keep this up to date. Usually you'd point this at an external address (e.g. your home router) so you can just remember myhouse.dyndns.org rather than 87.42.80.0 or whatever it's changed to recently.

    In your situation, I would do similar for each of your virtual machines. Set yourself up an account and get a hostname for each machine, then install the client and tell it to keep the record updated. So you end up with myvirt1.dyndns.org pointing to 192.168.x.x or whatever IP you've just been given.

    Saying that, could you not just ask your network admin for a block of static addresses?
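    One such update client is ddclient; a hypothetical configuration for one of the guests might look like this (hostname and credentials are placeholders, invented for illustration):

        # /etc/ddclient.conf (illustrative values only)
        protocol=dyndns2
        use=if, if=eth0            # report whatever address eth0 currently has
        server=members.dyndns.org
        login=my-dyndns-user
        password=my-dyndns-pass
        myvirt1.dyndns.org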

Sendmail delivering locally instead of to MTA in MX record

Ok, so I've got a box named websrv1.mydomain.com. It's a web server running ubuntu, apache2, sendmail, etc.

My email is outsourced to a third party, so in my DNS I've got MX set to mx.thirdparty.net. I've no reason to accept incoming mail on my web server; every email should be sent to the third party. This works correctly except when sending mail from the web server itself (i.e. via cron or the console).

So from my web server, if I send an email to me@mydomain.com, it just disappears. No errors, nothing in dead.letter, nothing. I can send to any other address with no issues. If I send to me@websrv1.mydomain.com it's delivered locally which is fine.

1) Doing an nslookup shows the mx record is correct.
2) Running /mx mydomain.com from sendmail -bt returns the correct result.
3) Running sendmail -bv me@mydomain.com returns:

 sudo sendmail -bv me@mydomain.com
 me@mydomain.com... deliverable: mailer esmtp, host mydomain.com., user me@mydomain.com

4) Running 3,0 me@mydomain.com in sendmail -bt returns:

    3,0 me@mydomain.com
    canonify           input: me @ mydomain . com
    Canonify2          input: me 
    Canonify2        returns: me 
    canonify         returns: me 
    parse              input: me 
    Parse0             input: me 
    Parse0           returns: me 
    Parse1             input: me 
    MailerToTriple     input:  me 
    MailerToTriple   returns: me 
    Parse1           returns: $# esmtp $@ mydomain . com . $: me 
    parse            returns: $# esmtp $@ mydomain . com . $: me 

So I'm at a loss. Sendmail seems to see the mx record, but it's not using it.

  • Have you looked at your maillog logfile? There might be some information there that can help you troubleshoot the problem.

    Another test you can do is to send email as a user on that machine to an account at your @domain.com and then see whether it is actually being delivered by sendmail by looking at your maillog logfile.

    I do not have an Ubuntu server I can access, but the maillog file should be /var/log/maillog

    Dennis Williamson : It's probably /var/log/mail.log (with a dot) - and mail.warn and mail.err
    Dennis Williamson : ...but that may just be postfix.
    CreativeNotice : Ok, thanks for the notes on the log file. Using that I was able to confirm the emails were being sent; the third party was blocking them because of a Spamhaus.org PBL listing. All fixed up now.
  • Check to ensure that sendmail is not configured to handle the local domain. Strange vanishing acts can occur if it tries to handle the email locally and delivery bounces, but the bounce message also bounces.

    Must you use sendmail? I've replaced everything with Postfix. It's much easier to handle, IMHO.

  • If you have a smarthost line in your sendmail.mc, is it in brackets?

    define(`SMART_HOST', `[smtp.thirdparty.net]')dnl

    That will cause sendmail to skip the MX record lookup and use the A record directly, which is probably what you want in this case.

  • What I did to disable local delivery. I'll be using the example.com domain.

    Requirements:

    • example.com A entry pointing to IP address assigned to one of the eth interfaces.
    • /etc/hosts defining example.com assigned to the very same IP address as above
    • example.com MX records pointing to Google servers (ASPMX.L.GOOGLE.COM, etc)
    • default sendmail installation (mine was on Ubuntu)

    Steps:

    vim /etc/mail/sendmail.mc
    

    at the end:

    define(`MAIL_HUB', `example.com.')dnl
    define(`LOCAL_RELAY', `example.com.')dnl
    

    and then:

    sendmailconfig
    service sendmail restart
    

    testing:

    echo -e "To: user@example.com\nSubject: Test\nTest\n" | sendmail -bm -t -v
    echo -e "To: user\nSubject: Test\nTest\n" | sendmail -bm -t -v
    

    You should see it connecting to the google server and then you should see your mail being delivered to your Google inbox.

  • If you're using postfix:

    1. Check your configuration: postconf | grep "^\(mydestination\|mydomain\|myhostname\)"
    2. If your mydestination includes the domain on which you've set up Google Apps, that domain is being treated as local.
    3. sudo vi /etc/postfix/main.cf, remove the domain from mydestination, then save.
    4. Check the configuration again (as in step 1) and restart postfix: sudo service postfix restart
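    A sketch of the resulting main.cf line, assuming (hypothetically) that example.com is the Google Apps domain that had crept into mydestination:

        # /etc/postfix/main.cf -- example.com removed from mydestination, so
        # mail for it is routed via its MX records instead of delivered locally
        mydestination = $myhostname, localhost.$mydomain, localhost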
    From Wernight

Setting up FTP on brand new EC2 Instance

I'm totally new to EC2 and only familiar with some basic Linux commands. I need to get a new Fedora 8 EC2 instance up to retrieve some data that was on a bad server; I have the data mounted via an EBS volume and I'm trying to FTP to the server now to download them. This is a base install of Fedora 8 using the "LAMP Web Starter (AMI Id: ami-2cb05345)" Instance provided by Amazon.

I have a user account created already, and I installed vsftpd, which is running. However, when I try to connect with FileZilla, the connection fails. The old server was using secure FTP, but I did not configure it and don't know what it was using to handle FTP (I googled for "Linux ftp" and found vsftpd).

I'm primarily a Windows guy so I don't know how to configure this... can anyone help so I can get these files downloaded?

  • Linux boxes use SSHd for both SSH and sFTP (secure FTP).

    Download a copy of WinSCP to your windows computer, and then use your SSH details to log into the remote server and download all your stuff.

    FTP IS BAD! Passwords and data are all transmitted in clear text :(

    Hope this helps.

    Wayne M : The problem though is we have offshore devs who need to FTP code there, should they be using SCP too?
    MidnighToker : personally I have everyone using SCP. hehe, even Dreamweaver has been supporting it for a few years now :) Frankly, if your devs are desperate to use an unsecured (and severely out of date) protocol for data transfer, I'd be taking a good long look at their code ;)
  • Avoid using ftp if you can. It's an old protocol, not secure and not firewall friendly.

    First try using scp or sftp. It's available by default if you have sshd installed. Another option is to create a tarball with the files you need and put it on Amazon S3.
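    The tarball approach can be sketched as follows. A sample directory stands in for the real EBS mount point here so the snippet runs standalone, and the final scp step (commented out) assumes a hypothetical key file and hostname:

```shell
# Bundle the recovered data into one file, then pull it down over ssh.
src=$(mktemp -d)/data                      # in practice: your EBS mount, e.g. /mnt/ebs/data
mkdir -p "$src" && echo "hello" > "$src/file.txt"
out=/tmp/recovered.tar.gz
tar czf "$out" -C "$(dirname "$src")" "$(basename "$src")"
tar tzf "$out"                             # lists data/ and data/file.txt
# From your workstation (key and hostname are placeholders):
#   scp -i mykey.pem root@ec2-xx-xx-xx-xx.compute-1.amazonaws.com:/tmp/recovered.tar.gz .
```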

    If you really must use ftp for some reason you will have to open a few TCP ports on your ec2 security group for passive mode to work. Using vsftpd for instance you have to set pasv_min_port and pasv_max_port in vsftpd.conf and open the corresponding ports.
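    A sketch of the relevant vsftpd.conf lines (the port range and address are illustrative; open the same TCP range in your EC2 security group):

        # /etc/vsftpd.conf (illustrative values)
        pasv_enable=YES
        pasv_min_port=50000
        pasv_max_port=50100
        # behind NAT (as on EC2), also advertise the instance's public IP to clients:
        pasv_address=203.0.113.10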

Offloading features of SBS 2008 Standard to member servers?

What I'd like to do is run SBS 2008 Standard on a machine with several features either disabled or simply not used. Basically I'd just like the box to be a domain controller and Exchange server, nothing else.

Then I'd like to use a Windows Server 2008 standard system as an "administrative machine", which would run things like WSUS, our anti-virus system, monitoring, etc.

Finally, I'd like to have a dedicated machine for WSS (low traffic) and file serving.

My question is: is it possible to disable or remove features like WSUS from SBS without it throwing a big cow? The only reason I want SBS is to save on Exchange costs. SBS Premium is interesting, but I don't need SQL Server and its price is quite high. Windows Essentials is also interesting, but is also not terribly affordable. I can get SBS Standard + CALs for a reasonable price, and I can add several 2008 Servers before I get close to the SBS Premium / Essentials cost.

My goal is to reduce the maintenance disaster that always results when using SBS to "do everything".

  • You can remove WSUS from SBS through the Programs and Features option in Control Panel. Doing so has not had any detrimental effects when I have done it, and WSUS has worked fine running on another machine.

    I would expect WSS should be similar, however I've not done that before.

    From Sam Cogan

Tool for finding Domain Names from a list of suggestions

I would like to create domain names for 3 websites. For each of the 3 websites I have written down 20-30 ideas.

Is there a website I can go to to see whether the .com domains for my ideas are available or taken? Manually entering one at a time takes a while.

Thanks.

setting multiple swapfile datastores on a host question

hello,

I'm trying to automate the -vmswapfiledatastore location on my cluster, and so far I've managed to add a single datastore that can be used. The problem is I can't figure out how to add multiple datastores allocated as swapfile datastores on my cluster.

here is my code:

    connect-viserver vcenter

    foreach ($vmhost in get-cluster "clustername" | get-vmhost) {
        $vmhost | set-vmhost -vmswapfiledatastore "VS01"
    }

This works perfectly, but I can't add more than one; if I try to add another it just replaces the current one. So how would I go about adding VS02 and VS03 to the datastores?

What is the trick to doing this?

thanks

  • Rather than { Get-vmhost $vmhost | set-vmhost -vmswapfiledatastore "VS01" }, could you not use different datastore names, to make it easier to script?

    eg: { Get-vmhost $vmhost | set-vmhost -vmswapfiledatastore "VS-$vmhost" }

    Saves you having to bother counting how many machines are running, or which one is using which datastore.

    I'm probably completely off the mark and missed the point, but...

Error setting up Blackberry Internet Service with Outlook Web Access

I'm trying to set up Blackberry Internet Service to connect to our Windows SBS 2003 Outlook Web Access. I've tried every possible combination of credentials, but I always get the same error:

An error occured during email account validation. Please check your information and try again. If the error persists please contact your System Administrator.

The fields are the following:

Outlook Web Access URL: http://mail.domain.com/exchange (I've also tried just using the IP address http://000.000.000.000/exchange with no effect).

User Name: JohnDoe (same as OWA login / domain username - I've also tried DOMAIN\JohnDoe)

Email Address: john.doe@domain.com

Mailbox Name: This one confused me a little bit, but it seems it should be the same as the domain username (eg. JohnDoe). I've also tried DOMAIN\JohnDoe, and a number of other things.

No matter what I do, I get the same error message. At this point, I'm basically just trying things, because I don't really know how this service is supposed to work. Does anyone know what causes this particularly vague error message, and what I can change either in my email settings or on our exchange server to resolve this?

  • The Mailbox Name likely refers to the Exchange alias. This is often the same as the user name, but not always. To be precise, if you look at the user's properties, the alias is often the same as the pre-2000 logon name. So check the user's account in AD, as the pre-2000 logon name may be different.

    If that doesn't work, check the alias. To do so, follow these instructions.

    From Sam Cogan
  • All you want is "mail.domain.com" .. the mail server, not the web access.

    From Trevoke

Sql Server - Error attaching mdf file encrypted via Encrypted File System (EFS)

I am getting an error trying to attach a database whose files were previously encrypted via EFS. The actual error message is:

Msg 5120, Level 16, State 101, Line 9
Unable to open the physical file "C:\test.mdf". Operating system error 5: "5(Access is denied.)".

If I decrypt the files, then I can successfully attach the database. I am using the same local user account that runs the SQL Server service. Any ideas? (I had previously asked this question on Stack Overflow, but I got comments that it belongs here.)

  • Are you sure that the service account you're using for SQL Server matches the user account you used to encrypt the files? EFS is handled at the OS level and is transparent to SQL Server.

    If this is the case, then check the general NTFS permissions; make sure that the service account has full control of the files you're trying to attach. May be stating the obvious, but you should always check that there's gas in the tank, don't assume!

    If you're using SQL Server 2008 Enterprise, then you should investigate using Transparent Data Encryption (TDE) instead of EFS; there's better performance and it's nicer to manage.

    Jangwenyi : Yes I can confirm I am using the same user account. It is worth mentioning that I am using SQL Express 2005 on Windows 2000 Professional SP4
    Nick Kavadias : are you attaching the database as sa? log into SSMS with the service account or as sa & try the attach
    Jangwenyi : For the benefit of the community, I found out the cause of this issue as follows: 1. Detaching/attaching the database uses the currently logged-on user. 2. Encrypting/decrypting the db files uses the SQL Server service account. 3. So if the currently logged-on user is a different account from the SQL Server service account, there will be an error trying to attach the database, because the files were encrypted under the other account (EFS works like that). 4. To resolve, use the same account to encrypt and attach.
  • Check the NTFS permissions (as Nick said) and ensure that the SQL Server has access to the files using the account that the service is running under, as well as the account you are using to connect to the SQL Server.

    When you detach a database the SQL Server automatically removes the rights to the files from everyone except the person who detached the database.

    Nick Kavadias : +1 for agreeing with me ;-)
    From mrdenny
  • Don't use EFS with SQL Server. If you must use OS-level encryption, then use BitLocker. Otherwise, use SQL Server's own TDE. EFS with SQL Server performs horribly.

  • EFS also doesn't do a real encryption of the file while the file is in use. It will only encrypt the physical data after the file stream is closed, which means you have to shut down SQL Server to encrypt the file. The file is decrypted when it is accessed and is stored in both the system cache and memory unencrypted. Just something to keep in mind.

    It sounds to me like it's more of a permissions problem and not related to EFS. Have you tried moving the file to the SQL Server data directory folder to see if it can be accessed there? We use that a lot when we are testing permission issues. If the server can attach it from there, we know the original location has a permission problem.

    From Phillip