Wednesday, January 26, 2011

Two users using the same user profile while not in a domain.

I have a Windows Server 2003 machine acting as a terminal server; this computer is not a member of any domain. We demo our product on the server by creating a user account. The person logs in, uses the demo for a few weeks, and when they are done we delete the user account.

However, every time we do this it creates a new folder in C:\Documents and Settings\. I know that with domains you can have many users point at one profile and make it read-only so all changes are discarded afterwards, but is there a way to do that when the machine is not on a domain? I would really like it if I didn't have to remote in and clean up the folders every time.

EDIT - I already have a utility for scripting the cleanup; I would just rather have the extra folders not created in the first place, if possible. It feels like the "correct" way of doing it.

  • I would use delprof.exe and set it up as a scheduled task. You can set it to a number of days that the profile has been inactive and it will delete the profile for you. Read more here:

    http://support.microsoft.com/kb/315411

    Scott Chamberlain : That may be my backup choice if I cannot get this to work, but what I would really like to know is whether I can make all users use one static profile.
    From Nixphoe
  • Why not use one Demo account or a set of demo accounts. Reset the password and profile instead of creating a new one.

  • It's called a "Mandatory Profile": http://support.microsoft.com/kb/307800

    Apply whatever settings you want to the profile, rename NTUSER.DAT to NTUSER.MAN, then, when creating accounts in the future (or modifying existing ones), set the "Profile" field in the account's properties to that profile's directory.

    (The link above shows how to set this path)

    From Grizly
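
The rename step from Grizly's mandatory-profile answer, sketched as a command run on the server (the profile path is a placeholder; this assumes the template profile was already copied to C:\Profiles\Demo and that the demo users have NTFS read access to it):

```
ren "C:\Profiles\Demo\NTUSER.DAT" NTUSER.MAN
```

After the rename, set each demo account's "Profile path" to C:\Profiles\Demo; every login then loads the same read-only profile and discards changes at logoff.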

How do I set a password that contains quotation marks in a connection string in an ASP.NET application?

Hi,

How do I set a password that contains quotation marks in a connection string in an ASP.NET application?

Answer:

Password: This"Is"Password

In the connection string: This""Is""Password (each embedded quotation mark is doubled)
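
In other words, either double the embedded quotes or wrap the value in single quotes. A sketch of how this might look in web.config (server, database, and user names are placeholders; note that `&quot;` is the XML escape for a literal `"` inside the attribute):

```xml
<connectionStrings>
  <!-- The actual password is This"Is"Password; single quotes delimit the
       value so the embedded double quotes need no doubling -->
  <add name="MyDb"
       connectionString="Server=myServer;Database=myDb;User Id=myUser;Password='This&quot;Is&quot;Password';"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```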

Duplicate free/busy or full calendar information between two exchange servers

We have acquired a new entity with their own AD/Exchange environment. Until they fully join our domain, we need to sync their exchange calendars or minimally free/busy for users. All users in the acquisition have a new account in our organization.

We have considered a number of options including CalDAV, but nothing seems to meet our needs.

Even a client-side solution that updates the NEW server with the free/busy data would be workable.

IP KVM switch, or serial console box for remote admin?

We have a small server farm (11 now, may add more in the future) of HP Proliant DL160 G6s. They all run either Linux (server only, no X11) or VMware ESX. We had intended to get models with iLO, in case BIOS-level remote admin became an issue, but that didn't happen.

I had an IP KVM switch recommended to me (along with some sort of Remote Reboot hardware.) I've since realized that none of our machines need GUI administration, so perhaps a serial console switch would be a cheaper and more appropriate option. Something like this: http://www.kvm-switches-online.com/serimux-cs-32.html

Do you folks have an opinion on which way is a better choice? Should we go for the ease of setup (plug and go, instead of turning on the feature in the BIOS and making sure the serial settings are correct) and the flexibility of an IP KVM switch even with the extra cost? Or is a serial console switch just fine?

  • If an IP-KVM solution is in your budget, I'd go for that, particularly if you have the ability to remotely trigger power cycles and the like.

    Can I recommend getting the type that doesn't use thick KVM cables, but instead uses CAT-5 or the like? The big cables are a PITA to run and deal with, whereas we're all pretty much used to CAT-5 by now.

    If you go that route, keep yourself sane by color-coding the cables so that your KVM lines don't resemble your network connections.

    Serial cables do work, but the ones I've used are susceptible to interference, plus the BIOS has to support redirect-to-serial (many do now, but there's always an outlier).

    Ignacio Vazquez-Abrams : They're all Linux, which means just spawning an instance of mingetty on ttyS0.
    Matt Simmons : Don't underestimate the power of remote bios administration. How else do you rebuild a RAID array remotely?
    Allan Anderson : Yes, these machines are all on RAIDs. From looking at HP's docs, I believe I can set redirect to serial in these machines' BIOS. Still, if we need to rebuild the RAID array, I guess we need to get into the room anyhow to reinstall the OS or hypervisor, eh? I know that this will be rarely used, but I'd like to make it as easy as possible for everyone in my group to use.
    Matt Simmons : grahzny: Yes, reinstalling an OS requires local access (except in the case of some remote access boxes), but there are lots of BIOS settings I've been grateful for being able to access without trudging into our colo
    Allan Anderson : Now something like this looks fancy! http://www.ami.com/products/product.cfm?CatID=4&SubID=32&prodid=196
  • I would recommend IP-KVM, especially the better models with virtual media (which means you can connect a virtual CD drive which consists of an .iso file reachable over the network). If you also have remote power control, you can do absolutely everything remotely which doesn't require direct physical access to the server (replacing or adding parts), including the installation of a hypervisor.

    From SvenW

Building a Data Warehouse

I've seen tutorials, articles, and posts on how to build data warehouses with star and snowflake schemas, denormalization of OLTP databases, fact and dimension tables, and so on.

Also seen comments like:

Star schemas are for datamarts, at best. There is absolutely no way a true enterprise data warehouse could be represented in a star schema, or snowflake either.

I want to create a database that will serve Reporting Services and maybe (if that isn't enough) install Analysis Services and extract reports and data from cubes.

My question is: is it really necessary to redesign my current database to follow the star/snowflake schemas with fact and dimension tables?

Thank you

  • It pretty much is, unless you dump the whole SQL side and build the repository in a Cube - in which case you MAY get away with an OLTP schema underlying the data.

    The main problem is that a non-star-schema approach simply puts a lot of burden on the server for analysis. That said, the idea to use Analysis Services is terrific - they shine in this area. Just try whether you can load them directly from the OLTP schema, possibly a snapshot of it.

    Paul : If you give me a good resource (tutorial) on how to build and make best use of OLAP you've got yourself a winning answer :)
    TomTom : Sorry, no online resource I know of. This is something requiring experience.
    From TomTom
  • Another part of the rationale of the data warehouse is that any computations to massage or transform the data are done prior to loading it into a particular schema, so that much of what is pulled from a data warehouse is "ready to use".

    From jl
  • I'd recommend a good book on the subject: http://www.amazon.co.uk/Microsoft-Data-Warehouse-Toolkit-Intelligence/dp/0471267155/ref=sr_1_3?ie=UTF8&s=books&qid=1272019644&sr=8-3

    Although it is targeted at 2005 (a 2008 version is, I think, in the pipeline), the general theory holds well, and the design and planning steps are almost platform independent anyway.

    Worth its weight in gold if you are looking to get into DW :)

    From Meff
  • There are a few things I would look at before redesigning your database.

    1. I'm pretty sure that Reporting Services doesn't need a star/snowflake schema to do its work, so you might see what you can build with your normalized database.
    2. Try building views that denormalize your OLTP data. It will get you thinking about the design aspects that you will need if you do redesign your database.
    From CTKeane

esx backup usb disk

My server has a disk error, unfortunately on a RAID-0 array. So I am planning to boot it off a CD (Parted Magic) and copy the VMs to a USB disk. The file system is VMFS (ESXi 4). Once the damaged disk is replaced, can the data be restored?

This server has two datastores; the bad disk belongs to store-1. Please suggest any better ways or tools. Thanks in advance.

  • RAID 0? groan

    The best mnemonic for RAID-0 is that it provides zero protection against data loss. If your server had a disk error, I would be very, very surprised if you were able to recover any data.

    If your disk isn't completely dead, and you do manage to get data off of it, replace the failing drive and rebuild the array using RAID 1, which will take the two drives and mirror them, so that if one fails, you can rebuild it.

    Do you have any other forms of backup? I suspect you'll have to use them, or rebuild the entire thing from scratch. Next time, don't use RAID-0.

    maruti : RAID-1? Please suggest a less expensive option in terms of disk cost.
    Holocryptic : +1 If you use RAID-0, expect to lose your data. It should not under any circumstance be used for important production equipment, unless you're a developer running builds. A pretty standard config would be RAID-1 for the OS volume (ESX) and RAID-5 or -10 for the volume holding the virtual machines.
    Holocryptic : Also, I don't see how RAID-1 would be more expensive than -0. You'd still be using 2 disks, and the ESX disk overhead is minimal.
    Matt Simmons : Less expensive in terms of disk cost? get another disk and use RAID-5.
    maruti : configure all as either RAID-0 or RAID-5? the server has 8 disks - 300GB 10K SAS...
    Matt Simmons : NOT RAID 0! RAID-5 gives the sum disk space of the entire array minus the space of one of the drives. Therefore, you'll have 2.1TB of usable space: (8 disks * 300GB) - one parity drive = 2100GB of space. If you want to be up on the "best practices", use RAID6, which provides 2 parity drives, but with only 300GB/drive, that shouldn't be necessary.
    maruti : thanks, so for people planning RAID choosing smaller capacity disks makes more sense than 3.5 inch large ones? i mean 2.5 inch 300GB Vs 3.5 inch 600GB.
    Matt Simmons : when it comes to that, it depends very much on the performance of the drive and the intended usage. Ask another question, and we can discuss it there
  • RAID-0 aside, yes you can restore the data back, assuming you recover it from the bad disk. When you have your disk arrays set up again (in a hopefully more fault tolerant setup), you can copy the data files back over. It's just a matter of adding the VMs to the inventory through VSphere.

    maruti : thanks, any free tools which backup esx vm?
    Holocryptic : I don't know about free, but you can look at http://www.veeam.com/. I heard from other people that they're ok.

Control which os is booted on multi-boot system

I am setting up a server with multiple operating systems for the automatic test runs of my company's product. I'd like to be able to control with a script which OS boots up after a restart, so I could say, for example, "after the Windows run, boot into Linux".

I thought of using the windows bootloader for all OSes, because it should be easy to just change the default entry in C:\boot.ini to whichever system I want to boot.

Is this a feasible way of doing this? Are there better options?

EDIT:
We already discussed virtualization, and it's not really an option.

  • In Vista/2008/7 there is no boot.ini; it's a Boot Configuration Database (BCD), and I don't think there are any linux tools for it yet (not sure).

    It might be a whole lot easier to set up virtual machines to do the testing in parallel.

    mooware : I already suggested virtualization like Xen, but these are mainly performance tests, so parallel runs are out of the question, and most of us are also opposed to running performance tests on virtual machines.
    sinping : bootcfg is the command you are looking for on the Windows side.
    Chris S : `bcdedit` is the program for editing the BCD, but that's not going to help Linux or any other OS. I'm thinking Moriarty is right on this one.
    From Chris S
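
To make sinping's and Chris S's suggestions concrete: on boot.ini-era Windows (XP/2003) `bootcfg` can switch the default entry from a script, while on BCD-based Windows (Vista and later) `bcdedit` plays the same role. A hedged sketch; the entry number and GUID below are placeholders you would read off your own system:

```
:: XP/2003: make boot.ini entry 2 the default for the next restart
bootcfg /default /id 2

:: Vista/2008/7: list the BCD entries, then pick one as default
bcdedit /enum
bcdedit /default {GUID}
```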
  • I would create a FAT16 /boot partition in Linux and just use GRUB; it is more flexible and supports a lot more operating systems.

    Prof. Moriarty : And, as Chris S says, VMs may be a better choice.
    mooware : Sounds good, but how do I change the default OS from windows? Do I just have to change '/boot/grub/default'?
    Prof. Moriarty : You would just change the x in the line 'default x' in D:\grub\grub.conf to point to whatever entry you need. You would have /boot as D: in Windows.
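
A sketch of the grub.conf layout Prof. Moriarty describes, for GRUB legacy (entry titles, partitions, and kernel paths are placeholders; menu entries are numbered from 0, so `default 1` boots the second entry):

```
# /boot/grub/grub.conf on the FAT16 /boot partition (seen as D:\grub\grub.conf from Windows)
default 1        # next boot: the second entry (Linux)
timeout 5

title Windows Server 2003
    rootnoverify (hd0,0)
    chainloader +1

title Linux test system
    root (hd0,1)
    kernel /vmlinuz root=/dev/sda3
    initrd /initrd.img
```

A test script on either OS can then flip the `default` line before rebooting into the other system.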

Windows Network Load Balancing on ESX Cluster with Dell PowerConnect stacks

We recently switched out our Cisco 6500 core switch for a pair of Dell PowerConnect 6248 stacks. Since then, our Network Load Balanced Sharepoint, which runs on two virtual machines on an ESX cluster has been behaving very poorly. The symptoms are that opening and saving documents stored in sharepoint takes a very very long time. There are no errors showing up on the Sharepoint servers or SQL server, just a lot of annoyed users. Initially I thought there was no way NLB could cause this, but as soon as we repointed the DNS records for our intranet to the ip address of one of the web front ends, the problems disappeared.

We suspect there is an issue related to multicast in the Dell configs - NLB is configured for multicast, but not IGMP.

Has anyone got a similar set up to us and fixed this sort of issue? Sharepoint on VMware ESX, with Dell PowerConnect switches.

  • We have seen almost the same issue. We are using NLB with multicast (but not IGMP) to load balance 14 web servers across two ESX 4 servers plugged into a pair of stacked Dell PowerConnect 6248s. The NLB was working but the performance was terrible. We tried changing everything on NLB (unicast, multicast, IGMP) and on the VMware switch (promiscuous mode, notify switches, etc.) and could not make it work. We added the multicast MAC to the Dell bridge and ARP tables, all to no effect. We eventually solved it by turning off routing for the VLAN on the PowerConnect (i.e. using a simple layer 2 VLAN) and using an external router to route the traffic. Would love to know how to make this work with routing on the Dell, as it should be supported.

    joeqwerty : The problem, IMHO, is not the switch but the NLB. We had similar problems with our TS NLB. There was so much NLB heartbeat traffic that it was smothering the rest of the network. We solved it by using 2 NIC's on each TS (which is MS best practice BTW) and connecting the NLB bound NIC to an isolated layer 3 switch and routing traffic for the backend through the second NIC, which was connected to the production network.
    From Alec

How to get the same result as modifying /etc/hosts file, without root access?

The question is simple: no root access, but need to point particular domain name to the specified IP address.

What are the other ways to do the same thing as with adding the record to /etc/hosts file?

UPDATE:

Clarification:
My domain has expired, but I still want to access it from the corporate network (no root privileges on my Linux workstation) by its domain name, for: http, https, imaps, smtps, ftp, and a couple of specific ports.

  • You could LD_PRELOAD in your own versions of gethostby{addr,name,...} which read /etc/hosts and then the user's own hosts file in their home directory.

    Kyle Brandt : +1 Have to remember this for the next time April 1st rolls around :-)
    From rkthkr
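
Building on the LD_PRELOAD idea above: rather than writing the override library yourself, a preload library such as nss_wrapper (from cwrap.org) already implements per-user host resolution. A minimal sketch, assuming the library is installed (its filename and path vary by distro, so the LD_PRELOAD line below is an assumption):

```shell
# Create a private hosts file; the IP and domain here are placeholders
export NSS_WRAPPER_HOSTS=/tmp/user-hosts
printf '203.0.113.10 expired-domain.example\n' > "$NSS_WRAPPER_HOSTS"
cat "$NSS_WRAPPER_HOSTS"

# Any dynamically linked client can then be run with the override, e.g.:
#   LD_PRELOAD=libnss_wrapper.so curl http://expired-domain.example/
```

This only affects processes started with the preload, which is exactly the scope a non-root user can control.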
  • The server is using a nameserver, right? (cat /etc/resolv.conf) So you could, if you have access to it, configure it to return a specific IP address for a particular domain.

    apenwarr : If he had root access to change /etc/resolv.conf, then he could change /etc/hosts too.
    From Marcel

ubuntu set system proxy from command line

Using the server version of 10.04 beta 2.

I need to set the proxy that the system should use.

Thanks

  • Depending on your needs you could add

       http_proxy="http://your.proxy.here:3128/"
       https_proxy="http://your.proxy.here:3128/"
       ftp_proxy="http://your.proxy.here:3128/"
    

    to /etc/environment to have them set by the login process.

    cheerio

    thecoshman : would this apply the proxy settings even if no one logs in? Ideally, I don't want to have to arse around on the machine, just to update it or what not.
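
To thecoshman's point: /etc/environment is only read when a session starts, so it may not cover unattended jobs. apt can be given its own proxy setting, which applies regardless of login (the file name and proxy host/port below are placeholders):

```
# /etc/apt/apt.conf.d/95proxy
Acquire::http::Proxy "http://your.proxy.here:3128/";
```

With this in place, `apt-get update` and unattended upgrades go through the proxy even when nobody is logged in.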
  • http://studyhat.blogspot.com/2010/01/squid-proxt-server.html

    In place of `yum install` you can use `sudo apt-get install squid`, then follow the blog.

    From Rajat

Silverlight 4 missing from Visual Studio 2010

Hi

I've just finished installing Visual Studio 2010 professional onto Vista. But don't seem to have Silverlight 4.

If I try to create a new project I can see Silverlight project templates but only seem to be able to target Silverlight 3.

Is Silverlight 4 not part of vs2010 pro by default?

I also noticed the MSBuild targets are missing, i.e. the v4.0 folder doesn't exist at the following location: C:\Program Files (x86)\MSBuild\Microsoft\Silverlight\

Any help/thoughts would be much appreciated.

Thanks Chris

  • Is Silverlight 4 not part of vs2010 pro by default?

    Correct. The SL4 SDK and tools are an add-on (and will be available "in a few hours").

    Update: "Silverlight Tools Released"

    mouters : I should have googled a little harder this page says it all: http://www.silverlight.net/getstarted/silverlight-4/
    From Richard

ESXi 4.0 Licensing Limits

From what I understand, the software is free and you just need to register to remove the 60-day limitation. Does this mean I have to register every time I install ESXi on a new machine? Or can I use the same key for different ESXi 4.0 installations?

  • This may have changed since then, but the last time I registered for keys I was prompted to enter a quantity, so you should be able to obtain what you need without having to repeat the registration.

  • As John mentioned, you can obtain a key for a certain number of activations. Each time you install a new ESXi machine, you activate with the same key.

    Chris S : You are on your honour to tell them how many you're using; even if there's no specific limit.
    Matt Simmons : Sure. I just request a number of activations equivalent to the number of physical hosts on my network. I will probably convert them all eventually, anyway.

Should I change the process priority of Apache Tomcat in Windows?

I have a web app running on Tomcat/Windows that is experiencing performance issues. Will setting the process priority help, or will this interfere with my system's stability?

(screenshot: Task Manager showing the Tomcat process's memory usage)

Thanks.

  • This will only help if something else on the server is taking away processor cycles and you are certain you don't want that something getting the cycles.
    In many years of using Windows server, I've never increased the priority of a task for anything other than experimentation.

    Are you certain that tomcat is processor starved and not waiting on disk IO or memory swap?

    \\Greg

    joe : Thanks. I believe it is a memory issue. Is it good or bad that Tomcat has high Mem Usage? I want it to utilize all the memory it can, and not artificially constrain it. I have 2 gigs in there; as you can see it is using up 1.4 gigs. I wonder if adding 2 more gigs would help.
    alex : Adding RAM will help if either you experience GC pauses or you can increase some cache setting thanks to the bigger heap or something like that.
    uSlackr : It is not inherently good or bad that it is using a lot of RAM. The question is whether the rest of the system is memory constrained. Check whether the number of hard page faults (where pages are being read from disk swap space) is high and sustained. I imagine there are some good articles on TechNet that discuss system performance and finding the bottleneck.
    From uSlackr

Which DNS settings are used when setting up your server

I have a server and want to run my own name server. I have already set it up and it works now, but I do not know where the exact settings are stored.

On my server I use Plesk. When I edit DNS settings there, I think they are stored in named.conf; named (BIND) is installed on the server.

Now I also have a panel from my registrar. This is separate from my server.

In both places I can add the normal MX, A, CNAME, etc. records.

Now, where is the best place to put these settings? Currently I have the same records in both places: on the server and in the registrar panel.

Am I correct to just add all the records in the registrar panel, remove everything from within Plesk, and simply not run DNS on my server, because it is already handled in the registrar panel?

Or should I add the records in both places?

  • named.conf is the BIND configuration file; you will have .hosts and .rev files for your forward and reverse lookup zones, respectively.

    If your registrar is also your DNS service provider, then you do not need DNS entries in both places.

    Saif Bechan : Ok thank you I will check and remove the entries from my server. I hope it does not break.
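
For reference, a minimal forward zone file of the kind mentioned above might look like this (all names, addresses, and the serial are placeholders):

```
; example.com.hosts - hypothetical forward zone
$TTL 86400
@   IN  SOA ns1.example.com. hostmaster.example.com. (
        2011012601 ; serial
        3600       ; refresh
        900        ; retry
        604800     ; expire
        86400 )    ; negative-answer TTL
    IN  NS  ns1.example.com.
    IN  MX  10 mail.example.com.
ns1 IN  A   192.0.2.1
www IN  A   192.0.2.2
```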
  • To move DNS you will need to access your registrar's control panel and give your domain new authoritative nameservers (the DNS on your LAN). You should have a backup DNS someplace too.

    On your network you will need to configure your zonefile (you can google for help on writing a proper zonefile) and then make sure to open up the proper ports on your firewall to allow DNS traffic from those on the outside who want to query (TCP/UDP 53).

    Are you sure you want to do this? If you just want to run a caching DNS server, you can leave your domain DNS with whoever you are using now (presumably your registrar) and just run one locally for LAN requests.

    Saif Bechan : No, I do not need to move anything. Everything is set up correctly. But now I have everything configured in both my registrar's panel and in the zone file, and I was thinking of removing the zone-file entries.

Nameserver Checker

Hello,

I've got BIND running on a server, and although access to the domains I've set up works correctly, I was wondering if there is an online (or offline) tool to check whether I have set up the service correctly?

Regards

Steve Griff

  • Ask a public recursive server for the records.

    For example, if I have just set up some record for randomtest.example.com on my server that's authoritative for example.com,

    host -t any randomtest.example.com. 8.8.8.8
    

    will tell me if it is getting out to other nameservers on the internet.

    From chris
  • BIND includes a set of tools for testing configuration and zone files. The man pages of named-checkzone and named-checkconf should give you all the information you may need.

    chris : That'll tell you if you've got the zones and server itself configured correctly, but it won't tell you if anyone on the internet will get the data, or if you've made a syntax error in the records that is "valid" but not what you mean, such as a missing trailing period.
    Benoit : @chris: indeed, it will only tell you if you have "setup" the service correctly, as asked by Steve. No one except you can tell if there is a typo in a record. Your only way to find out is to check the content of the zone files.
    From Benoit
  • ZoneCheck is available online or for download http://www.zonecheck.fr/

    From huguei
  • http://www.intodns.com is a good free online tool.

    Steve Griff : I like this. Gives an informative, easy to read report on the DNS.
    bortzmeyer : Fails on IPv6 name servers (in 2010!), warns when there is only one MX record (they don't know the difference between email and the Web?). Poor service, IMHO.
    Cristian Ciupitu : @bortzmeyer: you must admit that IPv6 isn't mainstream yet. As for the MX record, don't you think that it's normal to *warn* the user if a domain has less than 2 MX records? By the way what's the difference between email and the web and how can a DNS checking service tell the difference?
    bortzmeyer : The difference is that the Web is synchronous (you query a server, it must be available and reply immediately) while the mail is asynchronous (if the downstream MTA is not available, the upstream MTA queues and retries later). So, no, I don't agree, the vast majority of domains should have only one MX (gmail.com is of course a different case...)
    bortzmeyer : @Cristian Ciupitu Regarding IPv6, the problem is not that these services do not have IPv6 support (which would be understandable).The problem is that they FAIL when they encounter an IPv6 name server (marking the server as invalid) instead of simply ignoring it (as an IPv4-only DNS resolver would do). For instance, an option "Transport layer" of Zonecheck allows you to disable IPv6 and, in that case, IPv6 name servers are simply ignored.
    Cristian Ciupitu : @bortzmeyer: that would be a valid issue indeed, but as I've said before IPv6 is still not mainstream; I don't think that many people are affected by it.
  • Many good Web tools exist (and a lot of bad ones).

    From bortzmeyer
  • You can issue direct queries to your server using host/dig to test Bind configuration:

    host -t soa yourdomain.com server_ip # fetch SOA record
    host -t ns yourdomain.com server_ip  # fetch NS records
    host -t mx yourdomain.com server_ip  # fetch MX records
    host yourdomain.com                  # fetch A record
    

    or just

    host -t any yourdomain.com server_ip # fetch all
    

    I do recommend you to run tests from another machine to detect problems with firewall, interface binding, etc. If you don't want to muck with command line just use one of the online tools.

    http://www.dnsinspect.com is another online DNS testing tool.

    From vitalie

Microsoft DNS/DHCP using DDNS - Domain Suffix issue

I have an issue with our Microsoft DNS server, we're getting the dreaded "DNS Update Failed" in the DHCP logs.

We have two forward lookup zones, blah.com and somethingelse.com - blah.com is the one I want the workstations/DHCP to dynamically update.

However, I can only get it to work if I specify blah.com as the domain suffix in the network connection properties. I can think of two possible solutions, but have no idea how to implement them or if they're possible:

1) Designate blah.com as the "default" zone somehow on the DNS server, so all updates are sent to that zone unless the client's domain suffix is somethingelse.com.

2) Use DHCP option 15, which sets the domain suffix. We're currently doing that, but it doesn't seem to be taken into account when updating DNS.

Can anyone please shed some light? Thank you.

  • This ended up being a problem with the clients. I had to make sure they had the correct suffix and check the box "Use this suffix when updating DNS", then DHCP happily updated DNS for me.

    joeqwerty : Normally the client will register its primary DNS suffix (the computer name's FQDN suffix). If it doesn't have one (if it has a single-label NetBIOS name only or is in workgroup mode), or you want to register a different suffix, you need to do as you've done: add the suffix to the properties of the NIC and select the option to register the connection-specific suffix.
    From Samuurai

Why does a pdf file download result in varying bytes logged, all with sc-status 200

I have a mojoportal CMS installation on an IIS7 server where users are reporting problems downloading a pdf file. It always downloads fine for me and most others, either displaying in browser or in Adobe Reader.

Using logparser to query the IIS logs, all the responses are status 200 (OK) or 304 (Not Modified), but the bytes sent vary quite a bit: sometimes zero, some 211, some about half the full file size of 27059, and lots in between. Plenty show the full size of 27059.

Do these other entries for smaller byte counts represent errors of some kind, correlating with the problems reported? Is this likely to be a browser/client issue or a server side problem?

If there is any other info that would be helpful let me know. This is a shared hosting server though so I am somewhat limited in what I can dig into on the server.

*edit: I have noticed that the log entries with smaller byte counts come in a series of entries for the same client IPs, so I guess the browser is doing something where it gets the file in chunks over multiple requests. Summing the bytes by client still doesn't result in consistent total bytes transferred per client.

  • I implemented the hotfix at http://support.microsoft.com/kb/979543 on a VPS server I run and it has resolved the problem. Now to see how long my hosting company takes to implement the hotfix on their shared IIS server.

    It is a tricky one to diagnose since it depends on the version of the Adobe Reader browser plugin installed on the client.

    From Pat James

Server Security

I want to run my own root server (directly accessible from the web, without a hardware firewall) with Debian Lenny, Apache 2, PHP 5, MySQL, the Postfix MTA, SFTP (based on SSH), and maybe a DNS server.

What measures/software would you recommend, and why, to lock this server down and minimize the attack surface? Web applications aside...

This is what I have so far:

  • iptables (for gen. packet filtering)
  • fail2ban (brute force attack defense)
  • ssh (change the default port, disable root access)
  • modsecurity - it is really clumsy and a pain (any alternatives here?)

  • sudo - why should I use it? What is the advantage over normal user handling?

  • thinking about greensql for mysql www.greensql.net
  • is tripwire worth looking at?
  • snort?

What am I missing? What is hot and what is not? Best practices?

I like "KISS" -> Keep It Simple and Secure. I know, it would be nice!

Thanks in advance ...

  • post this question to Serverfault

    meagar : Post this answer as a comment :p
    mahatmanich : Hey Alexander, I am new here. What's Serverfault? Same service, different website, different login?
    Alexander : thanks, will know from now on.
    Alexander : yes, just copy your account to that service and you're all set ;)
    Buggabill : Similar service. You can use same OpenID login and copy your profile over there.
    mahatmanich : thanks gents ...
    From Alexander
  • For ssh, you can use both passwords and keys, but for root it is a good idea to only permit login using key-based auth, which is handy (I like ssh root@host).

    Maxwell : Better not permit root logins via ssh, but prefer su/sudo.
    mahatmanich : Maxwell, I am using su - to get root access. What's the difference between that and sudo?
    Bart Silverstrim : one, sudo can be configured to limit available commands that can be run. Two, sudo is logged when it's used. Three, it mitigates accidentally doing things that can damage the system, as you have to think slightly more before running privileged commands.
    mahatmanich : Thanks for the insights Bart ... I'll look into it!
    From aif
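
To illustrate Bart Silverstrim's first point, a sudoers fragment can restrict a user to specific commands (the user name and command path below are placeholders; always edit sudoers files with visudo):

```
# /etc/sudoers.d/demo (hypothetical): 'demo' may only restart Apache as root
demo ALL=(root) /usr/sbin/service apache2 restart
```

Every invocation is also logged, which is what gives sudo its audit trail over a plain `su -`.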

china and gmail attacks

"We have evidence to suggest that a primary goal of the attackers was accessing the Gmail accounts of Chinese human rights activists. Based on our investigation to date we believe their attack did not achieve that objective. Only two Gmail accounts appear to have been accessed, and that activity was limited to account information (such as the date the account was created) and subject line, rather than the content of emails themselves.” [source]

I don't know much about how the internet works, but as long as the Chinese government has access to Chinese internet providers' servers, why do they need to hack Gmail accounts? I assume I don't understand how submitting/writing a message (from the user to Gmail's servers) works before it is later sent to the other email address.

Who can tell me how submitting a message to a web form works?

  • Web access to a web form can be done through the HTTPS protocol, which encrypts the traffic, so a simple packet dump doesn't give you access to the message contained in the packets.

    Cristian Ciupitu : That's correct, but on the other hand HTTPS isn't immune to a man-in-the middle attack if your browser trusts an evil CA (Certification Authority). The CA can issue a certificate for `gmail.com` which then can be used for a fake Gmail server. The average user won't notice anything, unless he/she looks at the CA that signed the certificate and he/she knows it's not the right one. Firefox has a CA certificate from *China Internet Network Information Center*. I'll let others decide if this organization is evil or not.
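Cristian's advice to "look at the CA that signed the certificate" can be scripted. A minimal sketch in Python, assuming the nested-tuple cert structure returned by `ssl.SSLSocket.getpeercert()` (a live cert would come from a TLS-wrapped socket; the issuer values below are made up for illustration):

```python
def issuer_organization(cert: dict) -> str:
    """Pull organizationName out of the issuer field of a certificate
    dict in the getpeercert() format: a tuple of RDNs, each RDN a
    tuple of (key, value) pairs."""
    for rdn in cert.get("issuer", ()):
        for key, value in rdn:
            if key == "organizationName":
                return value
    return ""

# Hypothetical certificate in getpeercert() format:
cert = {"issuer": ((("countryName", "US"),),
                   (("organizationName", "Example Trust CA"),),
                   (("commonName", "Example CA 1"),))}
print(issuer_organization(cert))  # -> Example Trust CA
```

Comparing that string against the CA you expect for `gmail.com` is exactly the manual check Cristian describes, just automated.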

How can I stop IIS7 (integrated mode) from reporting a 404 before I get a chance to handle it?

I have an ASP.NET MVC 2 application running on IIS7 in integrated mode. I'm trying to do my own 404 handling, but IIS7 seems to be intercepting the error and returning its own 404 message to the client before I get a chance to handle it.

I'm not having much luck coming at the problem from a programming perspective over on Stack Overflow, so I wondered if maybe it's a configuration problem. Is there something I have to do to tell IIS to let me handle my own errors? (I'm trying to use Application_Error in my global.asax but it's not even getting there).

There is a custom error page defined (at the machine level, I think) for 404 but when I tried removing that it didn't really help - it simply showed a bald one-liner message instead. My code still didn't get a look in.

Is it perhaps something to do with the routing? Maybe the "mysite.com/nosuchpage" URL isn't being routed to me and that's why I don't get a chance to intercept it? Do I need to do something so that ALL requests get routed through my app?

  • You need to change the 404 page at the site level.

    Open the IIS snap-in.

    Drill down to your site on the left side.

    Then on Error Pages (right side, second section), edit the 404 page so that it directs to your custom page.

    You should be all set then.

    Gary McGill : @Dayton: I can see how I could use that to redirect to my own page, but the problem is that I actually want it to use the error handling that I've already set up (in my Application_Error). If I do as you suggest, then all my logging, etc. would have to be replicated there. And I'd end up with different behaviour from my (Cassini) development server, which is likely to be a headache. Can I somehow tell IIS to let me handle *all* errors and not stick its oar in?
    Dayton Brown : Bummer. You may want to repost to StackOverflow. This could be better solved over there.
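For what Gary is after, letting the application's own error handling produce the response instead of IIS replacing it, the setting usually pointed to in IIS7 integrated mode is `existingResponse` on the `httpErrors` element, plus routing all requests through the managed pipeline so `/nosuchpage` reaches the app at all. A sketch of the relevant web.config fragment (merge into your site's existing config rather than pasting wholesale):

```xml
<!-- web.config -->
<configuration>
  <system.webServer>
    <!-- Pass the app's own response (e.g. from Application_Error) through
         instead of substituting the IIS error page -->
    <httpErrors existingResponse="PassThrough" />
    <!-- Route extensionless URLs like /nosuchpage into the MVC pipeline -->
    <modules runAllManagedModulesForAllRequests="true" />
  </system.webServer>
</configuration>
```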

Oracle 11g RDBMS Enterprise Edition download?

I am looking for a URL where I can download the Enterprise Edition of Oracle 11g.

I know that I can download the Standard Edition of 11g at Oracle's technet, but as I would like to use some of those enterprise options, which are disabled in Standard, I would rather reinstall the Enterprise software on our development server.

  • Is it not here?

    "Oracle Database 11g Release 2 Standard Edition, Standard Edition One, and Enterprise Edition"

    Careful of those license costs for dev servers!

    ConcernedOfTunbridgeWells : IIRC OTN developer licencing is free - has this changed?
    Oliver Michels : developer license is still free.
    Antitribu : for a single developer; when he said "our" server it sounds like there could be more than one user connecting, and then Oracle can charge. They can be quite nasty about it too.
    From Antitribu

HTTPS Proxy - Is it possible to proxy a HTTPS request without having certificates setup?

Hi,

HTTPS Proxy - Is it possible to proxy a HTTPS request without having certificates setup?

In my case I'm still trying to work out how to do this with .NET HttpListener & HttpWebRequest. I've got it working for HTTP.

  • Will you be able to configure a proxy on the client? If a proxy is configured, the client will use an HTTP CONNECT to access https sites. No certificates need to be set up on the proxy.

    If you are trying to set up an interception cache (transparent) then you pretty much have to set up certificates and get the clients to trust them.

    GregH : so the proxy I have in mind is one where I would have to manually point my applications at it (e.g. for the browser, set the host/port of my home-grown proxy) - so this is not a transparent proxy, correct?
    From Zoredache
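The CONNECT handshake Zoredache describes is simple: the client sends `CONNECT host:port HTTP/1.1`, the proxy opens a plain TCP connection to that host, replies 200, and then blindly copies bytes while TLS runs end to end inside the tunnel, which is why the proxy needs no certificates. A minimal sketch in Python (as far as I know .NET's HttpListener does not surface raw CONNECT requests, so a raw socket listener is assumed; names are illustrative):

```python
import socket

def parse_connect(request_line: str):
    """Parse 'CONNECT host:port HTTP/1.1' into (host, port)."""
    method, target, _version = request_line.split()
    if method.upper() != "CONNECT":
        raise ValueError("not a CONNECT request")
    host, _, port = target.rpartition(":")
    return host, int(port)

def tunnel(client: socket.socket, request_line: str) -> None:
    """Open the upstream connection and acknowledge the tunnel.
    The proxy never sees plaintext: TLS is negotiated end to end
    between the client and the origin server inside the tunnel."""
    host, port = parse_connect(request_line)
    upstream = socket.create_connection((host, port))
    client.sendall(b"HTTP/1.1 200 Connection Established\r\n\r\n")
    # ...from here, copy bytes between client and upstream in both
    # directions (e.g. with select()) until either side closes.
```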