Sunday, January 16, 2011

Cross-server file transfer possibility - Windows Server 2008?

Guys, I am new to this, so I really need your expert advice.

I want to transfer file from C:/inetpub/vhosts/domain.com/subfolderA from Server 1 in location A

into

E:/inetpub/vhosts/domain.com of server 2 in location B (not in the same network)

automatically.

Both are win server 2008.

How is this possible?

Using a VBScript call to copy all files/folders is definitely not a problem.

But how about transfer of files/folders from server 1 to server 2 in a remote location?

Zip it first? Shell? FTP? What about the security of the files while they are being moved to the remote location?

Any example scripts or similar references, in VBScript or PHP?

  • Use whatever you are familiar with.

    People normally are using rsync.

  • Rsync is good; however, an alternative would be to use secure FTP:

    http://www.cuteftp.com/

    Which you can script via a GUI

    i need help : Seems to be a good one, but it's not a free version.
    i need help : Thanks, your secure FTP tip led me to google around this keyword, and I finally found my own solution: VBScript invoking the WinSCP command line for file transfer. Thanks again!
  • I just tried with WinSCP command prompt.

    The transfer from local win xp to remote win2008 server works fine.

    But when transferring directly from remote server to remote server, it fails at the put command stage.

    The MS-DOS window keeps hanging after I issue the command to upload files from the original remote server to the destination remote server.

    Guys, any idea?

    i need help : OK, fixed. It hung there because Windows Firewall wasn't allowing the program's full features to be invoked. After I added WinSCP to the firewall exceptions, it works fabulously.
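    For anyone following the same route, a WinSCP script driven from the command line looks roughly like the sketch below. Every detail here is a placeholder (hostname, credentials, host key fingerprint, and paths); only the `open`, `put`, and `exit` commands are real WinSCP script commands.

    ```
    ; transfer.txt -- run with: winscp.com /script=transfer.txt
    ; host, user, password, hostkey, and paths below are hypothetical
    open sftp://user:password@server2.example.com -hostkey="ssh-rsa 2048 xx:xx:..."
    put C:\inetpub\vhosts\domain.com\subfolderA\* /inetpub/vhosts/domain.com/
    exit
    ```

    Scheduling that one command with Windows Task Scheduler would automate the transfer.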

File locks on an NFS?

Hi,

My server uses NFS (Network File System), and I'm unable to use PHP's flock() function. Is there a way to lock files on an NFS mount, or is there even a need to do so?

  • flock() works just fine on Linux NFS, including from PHP. We use it extensively and have tested it thoroughly to verify it's working as desired. Check to see if you're running all of the necessary services on both the client and server. Look for "portmapper" and "rpc.statd". If they're not running, you need to figure out which init script starts them on your distro. On Debian-based distros it's "/etc/init.d/portmap" and "/etc/init.d/nfs-common".

    From the client, run "rpcinfo -u $NFSSERVER status" and see if you get a response. On my setup, I get "program 100024 version 1 ready and waiting" as the result.

    Oh, also bear in mind that in some circumstances NFS and statd can get upset if both the client and the server don't have reliable hostname entries for each other. Double check /etc/hosts on both machines.

    Tower : I'm not really in a position to alter server-specific details. The flock() function is even disabled in php.ini, because it would not work; at least that's what I've been told.
    From Insyte
  • Just wanted to answer to myself. The solution can be found here: http://us3.php.net/manual/en/function.flock.php#82521

    Insyte : The second option listed is exactly what I describe: using the built-in lock server in Linux NFS. The troubleshooting steps were designed to determine why it (apparently) wasn't working...
    From Tower
  • I don't know how the PHP flock() function is implemented, but assuming it's an interface to the flock() syscall, then it does not work at all over NFS. From the flock() manpage:

    flock(2) does not lock files over NFS. Use fcntl(2) instead: that does work over NFS, given a sufficiently recent version of Linux and a server which supports locking.

    From janneb
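    The flock(2) behaviour janneb quotes is easy to poke at from a shell using util-linux's flock(1) wrapper (the lock file path below is arbitrary). On a local filesystem, a held exclusive lock makes a second non-blocking attempt fail immediately; per the manpage quote, locks taken this way are not expected to propagate across NFS:

    ```shell
    #!/bin/sh
    # take an exclusive advisory lock on fd 9; -n = fail instead of block
    lockfile=/tmp/flock-demo.lock
    (
      flock -n 9 || exit 1
      echo "first holder: lock acquired"
      # a second, independent non-blocking attempt on the same file fails
      flock -n "$lockfile" -c 'true' || echo "second holder: lock busy"
    ) 9>"$lockfile"
    ```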

Ubuntu issues with IP

I'm having a really odd issue where ifconfig and my /etc/network/interfaces disagree. I have /etc/network/interfaces configured so eth0 has a static IP of 192.168.2.5; however, ifconfig says eth0's IP is 192.168.2.198 (in my DHCP range). As far as the rest of my network is concerned, the machine is located at 192.168.2.198. I've tried restarting networking (/etc/init.d/networking restart) twice now, and that didn't resolve the issue.

/etc/network/interfaces

auto lo
iface lo inet loopback

iface ppp0 inet ppp
provider ppp0

auto ppp0

iface eth0 inet static
address 192.168.2.5
netmask 255.255.255.0
gateway 192.168.2.1

ifconfig

eth0  Link encap:Ethernet  HWaddr 00:19:b9:6d:a2:b1
      inet addr:192.168.2.198  Bcast:192.168.2.255  Mask:255.255.255.0
      inet6 addr: fe80::219:b9ff:fe6d:a2b1/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:301767 errors:0 dropped:0 overruns:0 frame:0
      TX packets:76931 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:153435880 (146.3 MB)  TX bytes:9934052 (9.4 MB)
      Interrupt:22

lo    Link encap:Local Loopback
      inet addr:127.0.0.1  Mask:255.0.0.0
      inet6 addr: ::1/128 Scope:Host
      UP LOOPBACK RUNNING  MTU:16436  Metric:1
      RX packets:23150 errors:0 dropped:0 overruns:0 frame:0
      TX packets:23150 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes:9998881 (9.5 MB)  TX bytes:9998881 (9.5 MB)

wlan0 Link encap:Ethernet  HWaddr 00:19:7e:60:e7:b5
      UP BROADCAST MULTICAST  MTU:1500  Metric:1
      RX packets:0 errors:0 dropped:0 overruns:0 frame:0
      TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
      Interrupt:16 Memory:ecffc000-ed000000
  • can you post the output of:

    cat /etc/network/interfaces
    

    and

    ifconfig
    

    edit: oops, I missed one:

    nm-tool
    

    I'm betting that NetworkManager is why your interface is pulling DHCP. Check Preferences > Network Connections.

    Shadow : See edits......
  • Below is my /etc/network/interfaces file for a static IP running Ubuntu:

    # The loopback network interface
    auto lo
    iface lo inet loopback
    
    # The primary network interface
    auto eth0
    iface eth0 inet static
    address 10.10.100.17
    netmask 255.255.255.0
    network 10.10.100.0
    broadcast 10.10.100.255
    gateway 10.10.100.1
    

    Can you confirm that you are intentionally using the Point-to-Point Protocol? Also, do you want PPP or PPPoE? If you need to use PPP, would the following work?

    auto lo
    iface lo inet loopback
    
    iface ppp0 inet ppp
          provider myisp
    
    auto eth0
    iface eth0 inet static
    address 192.168.2.5
    netmask 255.255.255.0
    broadcast 192.168.2.255
    gateway 192.168.2.1
    

    In the above, you'll need to replace myisp with your specific ISP info. Also, if you are using PPP, could you post the output of:

    cat /etc/ppp/options   # or any other interesting files in this directory
    cat ~/.ppprc
    
    Shadow : I'm pretty sure the ppp interface has been there since I installed the OS.
  • You don't have eth0 flagged as "auto" in /etc/network/interfaces. This means restarting networking is going to ignore that interface and it will just keep whatever config it already had (apparently a DHCP assigned address).

    Try this:

    1. Run "ifconfig eth0 0 down"
    2. Edit /etc/network/interfaces and add auto eth0 above the definition of the eth0 interface.
    3. Run "ifup eth0". It should come up with the address you assigned in /etc/network/interfaces.

    You may also want to check your process table for an instance of dhclient. If it's there, kill it.

    Shadow : Thanks! That fixed it.
    From Insyte
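    For reference, with that one-line fix applied, the /etc/network/interfaces from the question would read:

    ```
    auto lo
    iface lo inet loopback

    iface ppp0 inet ppp
    provider ppp0

    auto ppp0

    # the missing line: bring eth0 up automatically when networking starts
    auto eth0
    iface eth0 inet static
    address 192.168.2.5
    netmask 255.255.255.0
    gateway 192.168.2.1
    ```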

Adding a second IP to Debian Server

I was given a second IP by my server provider. I am running Debian 5.0. I thought I knew how to add the IP to the system and configure it with Apache, but I have not yet been able to.

The primary IP works fine and I have a few sites already running on that one.

What steps would I take to add this second IP so that I use it in apache?

  • You are supposed to configure a new alias for your second IP.

    The essential resources for IP management are the file /etc/network/interfaces and the ip tool from the iproute package.

    Where are things breaking? What doesn't work?

    From scyldinga
  • Assuming the new IP address is on the same subnet as the first, add a second virtual interface (sometimes called an "alias") to the primary network interface. This is configured, like all network interface settings, in /etc/network/interfaces. The Debian Reference manual has a section on the topic:

    http://www.debian.org/doc/manuals/debian-reference/ch05.en.html#%5Fthe%5Fvirtual%5Finterface

    A simple example, assuming your primary network interface is eth0 and has an ip of 192.168.1.1 and the new ip is 192.168.1.2:

    auto eth0
    iface eth0 inet static
      address 192.168.1.1
      netmask 255.255.255.0
      gateway 192.168.1.254
    
    auto eth0:0
    iface eth0:0 inet static
      address 192.168.1.2
      netmask 255.255.255.0
    

    Once the appropriate settings have been added to /etc/network/interfaces, run ifup eth0:0 to activate the new interface.

    If, however, the new ip is on a different subnet, you need to either provision the ip on a physically distinct network interface or create a VLAN interface, depending on how your ISP is prepared to hand it off to you. That's a whole new topic.

    David : Thanks for the answer. I tried following these instructions, but when I restart the network interfaces I receive this error: "/etc# /etc/init.d/networking restart Reconfiguring network interfaces...SIOCDELRT: No such process if-up.d/mountnfs[eth0]: waiting for interface eth0:0 before doing NFS mounts (warning). done." The subnet is the same for both IPs, so from what I know and have read, the above solution should work. The virtual hosts that I set up are not working, and I get a network timeout on the domain.
    Insyte : The warnings sound reasonable. The fact that it makes reference to waiting for eth0:0 is good news. What do your network interfaces look like after restarting networking? Specifically, what does the output of "`ifconfig -a`" look like?
    From Insyte
  • If you use the iproute package, you can put this in /etc/network/interfaces:

    auto eth0
    iface eth0 inet static
        address 10.0.0.17
        netmask 255.0.0.0
        gateway 10.0.0.1
        up   ip addr add 10.0.0.18 dev eth0
        down ip addr del 10.0.0.18 dev eth0
    
    From hop
  • Even Simpler:

    Use an "addresses" line in /etc/network/interfaces

    iface eth1 inet static
            address 10.10.0.66
            netmask 255.255.255.240
            network 10.10.0.64
            broadcast 10.10.0.79
            gateway 10.10.0.65
            addresses 10.10.0.67/28 10.10.0.67/28 10.10.0.68/28
    

    You can use a space-separated list of IPs/CIDR netmasks.

    This is a stripped-down version of my interface definition (IPs changed and only the relevant part shown).

MySQL performance over a (local) network much slower than I would expect

MySQL queries in my production environment are taking much longer than I would expect them to. The site in question is a fairly large Drupal site with many modules installed. The webserver (Nginx) and database server (MySQL) are hosted on separate machines, connected by a 100 Mbps LAN connection (hosted by Rackspace).

I have the exact same site running on my laptop for development. Obviously, on my laptop, the webserver and database server are on the same box.

Here are the results of my database query times:


Production:

Executed 291 queries in 320.33 milliseconds. (homepage)

Executed 517 queries in 999.81 milliseconds. (content page)

Development:

Executed 316 queries in 46.28 milliseconds. (homepage)

Executed 586 queries in 79.09 milliseconds. (content page)


As can clearly be seen from these results, the time spent querying the MySQL database is much shorter on my laptop, where the MySQL server is running on the same machine as the web server.

Why is this?!

One factor must be network latency. On average, a round trip from the webserver to the database server takes 0.16 ms (shown by ping), and that must be added to every single MySQL query. So, taking the content page example above, where 517 queries are executed, network latency alone will add roughly 82 ms to the total query time. However, that doesn't account for the difference I am seeing (79 ms on my laptop vs. 999 ms on the production boxes).
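(For reference, that figure is just the per-query round trip multiplied by the query count:)

```shell
# 0.16 ms average round trip x 517 queries (content page)
awk 'BEGIN { printf "%.1f ms\n", 0.16 * 517 }'   # prints "82.7 ms"
```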

What other factors should I be looking at? I had thought about upgrading the NIC to a gigabit connection, but clearly there is something else involved.

I have run the MySQL performance tuning script from http://www.day32.com/MySQL/ and it tells me that my database server is configured well (better than my laptop, apparently). The only problem reported is "Of 4394 temp tables, 48% were created on disk". This is true in both environments, and in the production environment I have even tried increasing max_heap_table_size and tmp_table_size to 1GB, with no change (I think this is because I have some BLOB and TEXT columns).

  • Do you have query caching enabled? Look at the output of:

    SHOW VARIABLES LIKE 'query_cache_size';
    

    If it's greater than zero, it's on. This will make quite a difference to the performance of a website backed by MySQL, though I'm not sure if it's anything to do with your specific issue here.

    I don't suppose there's any way to capture the set of queries and run them from a login on the MySQL database server, to see whether things are different there?

    Another possible issue is that if you're not doing session caching, you're opening a new session every time you make a database call. Do you know if you're session caching? Opening a new session for every call can get expensive across the network.

    : Yes, I have the query cache enabled in both the production and development environments. Production: 134217728. Development: 16777216. I'm pretty sure session caching must be in use, though I don't know how to test that. I cannot imagine that Drupal opens a new database connection for every query!
    From Morven
  • The only problem reported is "Of 4394 temp tables, 48% were created on disk". This is true in both environments and in the production environment I have even tried increasing max_heap_table_size and Current tmp_table_size to 1GB, with no change (I think this is because I have some BLOB and TEXT columns).

    You can put MySQL's tmpdir on a RAM-based filesystem (e.g. tmpfs) to help improve performance.

    Cheers

    P.S.

    Maybe the performance hit is RAID writes in Production.

    : sure, but that still doesn't explain the difference between the two setups, which both have that problem.
    From Jason
  • To identify where the bottleneck is, I would profile the query: http://dev.mysql.com/doc/refman/5.0/en/show-profiles.html

    If the numbers are way lower than what you can see from the same query you run on your web server, then it is a "transport" issue. (Rackspace network latency, switch config, large sets of data (SELECT *) with TEXT/BLOB fields returned, etc.)

    If the numbers are about the same, you have to optimize your queries and improve your config file. Here is a link that may help: http://stackoverflow.com/questions/1191067/how-to-test-mysql-query-speed-with-less-inconsistencies/1191506#1191506

    BTW, when my server went from 100 Mbps to 1 Gbps, the performance increase was amazing!

    : I tried profiling some queries in this way. It turns out that queries run from the production web server or database server take about the same amount of time to run. The same queries run on my laptop actually run slower than in production!
    : Although, what I was comparing there was possibly what you were getting at - I logged into MySQL on the web server and on the database server, ran the queries on both, and compared the profile data. Is that right? Or do you mean to compare the query time reported by MySQL profiles (which wouldn't include any network latency) with what is reported by Drupal (which would include network latency)?
    Toto : Good point: you could run some queries from the web server and then directly on the database server. The difference would be the "transport" time. Re-reading your question, I noticed that each page can generate 500+ queries?! That is way too much. You definitely have to cache data (memcached, HTML cache, etc.) to reduce the number of queries. Do not forget that PHP executes requests one after the other (in the same script), which means that you have too much communication between your 2 servers and it takes time. :(
    From Toto
  • The TCP connection setup and tear down is much more costly than the socket connection when working on localhost. Check how many connections to the database are initiated in a page load.

    : Max connections is set to 100. Max used connections shows consistently as 28. I think this number must relate to the 38 fcgi processes I have somehow. Only one connection is created per page load - that is how drupal works.
    From Craig
  • It's really hard to give you a definite answer. I really don't think the network is the bottleneck, well, unless you have some really heavy traffic on the server.

    Do you have the same exact configuration on both your development machine and the production server?

    Are you sure that all indexes were created successfully on the production server?

    It could be the amount of data on your production server. I mean, to compare the results between the production and development machines you should have the same number of rows in the tables - is this the case?

    Did you use the same installation files to setup MySQL on your development machine and production server? I mean do you have the same version of MySQL on both your development and production machines? Is it for the same OS? Also 32bit or 64bit? Probably a bug in the version of MySQL installed on the server.

    The possibilities are endless, without more details it will be really hard to give you an informative answer.

    : Network traffic between the web and database servers isn't high at all. MySQL communication happens on a dedicated interface, and the load on it is well within its bounds. The MySQL configuration on the production and development machines is different (I'll update the main thread with the details in a minute). The two databases are identical, with the same amount of data in each (I regularly re-sync them). The production machine is using MySQL 5.0.45 x86_64. Development is using 5.0.75 x86_64.
    From

How to redirect subdomain to domain with subdomain as parameter

I would like to redirect my wildcard subdomain to my main domain using the subdomain as a GET parameter in the URL using .htaccess

Example of what I would like to happen:

billy.example.com

redirects to

www.example.com/profile?user_name=billy

Here is the current rule I have put together based on other answers on this site. However, when the redirect happens I get the following URL with a "Redirect Loop" error:

www.example.com/profile?user_name=www

RewriteCond %{HTTP_HOST} ^(.*).example.com
RewriteRule ^(.*)$ http://www.example.com/shop?user_name=%1 [R,L]

I've seen this answered a few different ways on this website but unfortunately none of them worked for me. Any help would be appreciated. Thank you

Details: I'm using PHP, and I have set up * as a subdomain in my hosting panel.

  • After trying to solve this using .htaccess and failing, I ended up using PHP to do the URL parsing for me. For those who wish to see what I did:

    $host = $_SERVER['HTTP_HOST'];
    $subdomain = str_replace('.example.com', '', $host);
    if($subdomain != 'www')
        header("Location: http://www.example.com/shop?user_name=".$subdomain);
    
    From justinl
  • Your redirect loop is happening because you are redirecting everything (even www.example.com). Maybe try something like:

    RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
    RewriteCond %{HTTP_HOST} ^([^\.]+)\.example\.com$ [NC]
    RewriteRule ^/(.*)$ http://www.example.com/shop?user=%1 [L,R=301]
    

    This should only fire when you're on something other than www.example.com.

    justinl : Hi Seth. I tried that code you gave me but it doesn't appear to redirect the page. When I enter username.example.com it just stays on the main homepage and the URL says username.example.com.
    From Seth
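    One possible explanation for that rule not firing (an educated guess, not confirmed in this thread): in a per-directory/.htaccess context, Apache strips the leading slash from the URL path before matching, so a pattern anchored as ^/(.*)$ never matches anything. The same rule without the slash, using the question's user_name parameter, would be:

    ```
    RewriteEngine On
    RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
    RewriteCond %{HTTP_HOST} ^([^\.]+)\.example\.com$ [NC]
    RewriteRule ^(.*)$ http://www.example.com/shop?user_name=%1 [L,R=301]
    ```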
  • Does this hurt system performance much, compared to having the GET parameter in the URL as usual?

    From gary

sql server 2005 Replication

Hi friends,

We have a SQL Server 2005 instance which contains 4 to 5 databases that are updated externally every day. Currently we are backing up each database and attaching it to a different server, and working on it there, in order to make sure nothing is deleted or changed in the original database.

But this backup and restore process has become a hassle, so I have looked at a couple of options, such as replication services. However, I don't seem to have the replication components installed; when I tried to install them I could only see the subscription services, not the publisher services. Anyway, we use SQL Server 2005 Express Edition - is replication the best bet, or do you suggest any other ways?

If so, how do we get the replication components? And if not, what are the other ways?

Thanks in Advance

  • You might want to check out this article at databasejournal.com. Of particular note is this paragraph:

    You should keep in mind that replication functionality is not incorporated by default in the SQL Server 2005 Express Edition installation. The option controlling this behavior is accessible by expanding the Database Services node on the Feature Selection page of the setup wizard and can be modified by assigning "Will be installed on local hard drive" value to its Replication entry. In addition, if you intend to take advantage of the connectivity and Replication Management Objects (RMO), you should apply the same setting to the Connectivity Components subnode of the Clients Components node on the same page of the wizard. In case you missed these steps during initial setup, simply launch SQLEXPR32.EXE (or SQLEXPR.EXE for 64-bit systems) to modify an existing instance (for the background information regarding this process, refer to our earlier article).

  • Replication would really be like shooting a fly with an elephant gun.

    Why not just restore your database with a different name? For instance, you could have "MyProductionDB" and "MyTestDB". Then script out the backup and restore, and you need only run the script when you want to "refresh" the test instance (note that there is a "Generate Script" button at the top of the backup and restore dialogs).

    Alternatively, do the same scripted backup/restore, but restore to a test instance if you can't change the database name. The test instance could even be on the same physical machine - MyComputer\SQLExpress and MyComputer\SQLExpressTest.

    Aaron Alton : RESTORE DATABASE in Books Online: http://msdn.microsoft.com/en-us/library/ms186858.aspx (Were you looking for something more specific?)
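    A minimal T-SQL sketch of that scripted backup/restore (the database names, paths, and logical file names are placeholders - check yours with RESTORE FILELISTONLY):

    ```sql
    -- back up the production database
    BACKUP DATABASE MyProductionDB
      TO DISK = 'C:\Backups\MyProductionDB.bak'
      WITH INIT;

    -- restore it under a different name, relocating the data and log
    -- files so they don't collide with the production copies
    RESTORE DATABASE MyTestDB
      FROM DISK = 'C:\Backups\MyProductionDB.bak'
      WITH REPLACE,
        MOVE 'MyProductionDB' TO 'C:\Data\MyTestDB.mdf',
        MOVE 'MyProductionDB_log' TO 'C:\Data\MyTestDB_log.ldf';
    ```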
  • Replication services on the publisher's side are not available with SQL Express. This is a licensing issue. SQL Express only allows you to subscribe to an existing publication.

    Instead of replication, you could set up a scheduled backup job. As job management is not available in SQL Express (and I guess you don't want to pay for the full SQL Server license), you can find third-party software that will allow you to manage such tasks. I suggest you google for 'sql express job scheduler' or a similar query.

  • Replication may be overkill here, since it's designed more for continuous transfer of data than just a working backup copy. Obviously there are periodic replication methods (snapshot), but I still think it's more than you need for what you're trying to do.

    If both database instances are on the same server (or have visibility to a common network location), then you can just schedule a daily backup on the first and a daily restore 30 minutes later on the second one (or whatever delay is feasible). This way, it happens automatically.

    I just set up a more complicated process, in case you don't have a common network location. A batch file on the source creates a backup through OSQL, then zips it and uses an FTP script to send it across the internet. Another script on our destination sees the file in the FTP receive folder, extracts it, and then restores it using OSQL, sending an email notification when it's done. Definitely more complicated, but also an option if you need the flexibility.

    From rwmnau

Does Mac OS X Server incorporate all the capabilities of Mac OS X?

I'm thinking about purchasing a copy of Snow Leopard Server as I'm setting up a web/mail server pretty soon. I also want to dig into things that the server has to offer. I'm looking into maybe getting a trial copy first (if possible). However, since I only have one personal Mac, I'm concerned with compatibility issues. Can Snow Leopard Server do anything (/run anything) that Snow Leopard would?

Thanks!

  • I'm not entirely sure about the Snow Leopard situation (I haven't upgraded yet, pending Apple fixing a few early issues), but that was indeed the situation for Leopard, and Tiger before it; I'd be amazed if that got broken in this latest release.

  • OS X server is pretty much the same as OS X Client. The only difference is that Server gives you a whole bunch of server applications and management tools to allow you to run a network of Macs.

    There may be some minor tuning of system parameters and such, but other than the additional software it's basically the same.

    Paul

Transaction log is full

Hi,

Can somebody please let me know how we can find out that the transaction log is full, or how we can figure out what is causing the transaction log to grow? Please help.

Thanks,

  • This probably belongs on ServerFault.com

    The short answer is, however, that lots of reads/writes are causing this. Increase the size of the log, or run DB maintenance and backups more frequently.

    GilaMonster : Specifically log backups
  • The transaction log in a user database will grow because the database is set to full recovery, and the transaction log is not being backed up. If you don't need the ability to do point in time restores then you should change the database recovery from full to simple.

    If you do need point in time restores then you need to start backing up the transaction log.

    From mrdenny
  • Have a read through this, see if it helps - Managing Transaction logs

    If the log is filling up, generally either the max size is too small or you're running in full/bulk-logged recovery model and you're not backing the log up.
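    To check both of those (log usage and recovery model) on SQL Server 2005, something along these lines works; the database name is a placeholder:

    ```sql
    -- how full is each database's transaction log?
    DBCC SQLPERF(LOGSPACE);

    -- which recovery model is the database using?
    SELECT name, recovery_model_desc
    FROM sys.databases
    WHERE name = 'MyDatabase';

    -- in full recovery, back the log up regularly so it can be truncated
    BACKUP LOG MyDatabase TO DISK = 'C:\Backups\MyDatabase_log.trn';
    ```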

kmemsize problems in VPS even when there is about 500MB free mem

Hello,

I have a site hosted on a Plesk VPS with 512MB of memory, and I keep getting kmemsize "black zone" QoS errors. The soft limit for kmemsize is 12,288,832 and the hard limit is 13,517,715.

The definition Virtuozzo gives is: Size of unswappable memory, allocated by the operating system kernel.

What's eating up the kmemsize? Is there any way to reconfigure and increase the kmemsize? The server barely has any load or processing.

Thanks for the help...

  • I think it is eaten by processes (and threads). Do you have lots of processes/threads running? As for increasing it - well, that's up to the hoster; you can't do anything yourself.

    From wRAR

TF30004: The New Team Project Wizard encountered an unexpected error while initializing the Microsoft.ProjectCreationWizard.Portal plug-in.

Hello, I get this error when I try to create a new team project. The server is right; I checked all the ports. I don't know what I should do now, and I can't find any good information.

2009-09-19 01:45:41Z | Module: Internal | Team Foundation Server proxy retrieved | Completion time: 0.338 seconds
2009-09-19 01:45:41Z | Module: Internal | The template information for Team Foundation Server "TFSServer01" was retrieved from the Team Foundation Server. | Completion time: 0.099 seconds
2009-09-19 01:45:41Z | Module: Wizard | Retrieved IAuthorizationService proxy | Completion time: 0.404 seconds
2009-09-19 01:45:41Z | Module: Wizard | TF30227: Project creation permissions retrieved | Completion time: 0.015 seconds
2009-09-19 01:45:44Z | Module: Engine | Thread: 5 | New project will be created with the "MSF for Agile Software Development - v4.2" methodology
2009-09-19 01:45:44Z | Module: Engine | Retrieved IAuthorizationService proxy | Completion time: 0 seconds
2009-09-19 01:45:44Z | Module: Engine | TF30227: Project creation permissions retrieved | Completion time: 0.01 seconds
2009-09-19 01:45:45Z | Module: Engine | Wrote compressed process template file | Completion time: 0.001 seconds
2009-09-19 01:45:46Z | Module: Engine | Extracted process template file | Completion time: 1.428 seconds
2009-09-19 01:45:46Z | Module: Engine | Thread: 5 | Starting Project Creation for project "TestProject" in domain "TFSServer01"
2009-09-19 01:45:46Z | Module: Engine | The user identity information was retrieved from the Group Security Service | Completion time: 0.045 seconds
2009-09-19 01:45:46Z | Module: Initializer | Thread: 5 | The New Team Project Wizard is starting to initialize the plug-ins.
2009-09-19 01:45:46Z | Module: CssStructureUploader | Thread: 5 | Entering Initialize in CssStructureUploader
2009-09-19 01:45:46Z | Module: CssStructureUploader | Thread: 5 | Initialize for CssStructureUploader complete
2009-09-19 01:45:46Z | Module: Initializer | Thread: 5 | The New Team Project Wizard successfully Initialized the plug-in Microsoft.ProjectCreationWizard.Classification.
2009-09-19 01:45:46Z | Module: Rosetta | Thread: 5 | Entering Initialize in RosettaReportUploader
2009-09-19 01:45:48Z | Module: Rosetta | Thread: 5 | Exiting Initialize for RosettaReportUploader
2009-09-19 01:45:48Z | Module: Initializer | Thread: 5 | The New Team Project Wizard successfully Initialized the plug-in Microsoft.ProjectCreationWizard.Reporting.
2009-09-19 01:45:48Z | Module: WSS | Thread: 5 | Entering Initialize in WssSiteCreator
2009-09-19 01:45:48Z | Module: WSS | Thread: 5 | Site information: Title = "TestProject"  Description = "This team project was created based on the 'MSF for Agile Software Development - v4.2' process template."
2009-09-19 01:45:48Z | Module: WSS | Thread: 5 | Base site url: http://TFSServer01:14143/webbplatser
2009-09-19 01:45:48Z | Module: WSS | Thread: 5 | Admin site url: http://TFSServer01:16183/_vti_adm/admin.asmx
---begin Exception entry---
Time: 2009-09-19 01:46:27 Z 
Module: Initialize 
Event Description: TF30207: Initialization for plugin "Microsoft.ProjectCreationWizard.Portal 'failed 
Exception Type: Microsoft.TeamFoundation.Client.PcwException 
Exception Message: The client discovered that content-type of request is text / html; charset = utf-8, but the text / xml expected. 
The request failed with error message: 
-- 
Unable to connect to the configuration database.
--.
Stack Trace:
   vid Microsoft.VisualStudio.TeamFoundation.WssSiteCreator.CheckPermissions(ProjectCreationContext ctxt)
   vid Microsoft.VisualStudio.TeamFoundation.WssSiteCreator.Initialize(ProjectCreationContext context)
   vid Microsoft.VisualStudio.TeamFoundation.EngineStarter.InitializePlugins(MsfTemplate template, PcwPluginCollection pluginCollection)
--   Inner Exception   --
Exception Type: System.InvalidOperationException 
Exception Message: The client discovered that content-type of request is text / html; charset = utf-8, but the text / xml expected. 
The request failed with error message:
--
Unable to connect to the configuration database.

--.
Stack Trace:
   at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall)
   at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
   at Microsoft.TeamFoundation.Proxy.Portal.Admin.GetLanguages()
   at Microsoft.VisualStudio.TeamFoundation.WssSiteCreator.CheckPermissions(ProjectCreationContext ctxt)
-- end Inner Exception --
--- end Exception entry ---

Thanks for your help

Strip text from some delimited fields in a list

Hello all!

Wondering what the fastest way would be to strip the text from just some delimited fields in a list.

My list looks like this:
text text:number:text:text:text:text:*:*:*:*:*:*:*:*:*:*:*:*:*:*:*:*

And I want it to look like this:
text text:*:*:*:*:text:*:*:*:*:*:*:*:*:*:*:*:*:*:*:*:*

So some fields that have data need to be replaced by asterisks, some fields need to be left untouched, and the delimiter isn't consistent (the first and second fields are separated by a space). This is on a Linux filesystem, and a way to edit the file in place is preferred.

Thanks much for the help!

  • I would use a regular expression that matches your text (parentheses capture text into buffers designated by \1, \2, etc). Use [^:]* for each field rather than the greedy .* so the groups can't swallow colons:

    ^([^:]*):[0-9]+:[^:]*:[^:]*:[^:]*:([^:]*):
    

    and a replacement regular expression (fields 2 through 5 become asterisks, field 6 is kept):

    \1:*:*:*:*:\2:
    

    with sed (basic regular expressions, so the groups are escaped; -i edits the file in place):

    sed -i 's/^\([^:]*\):[0-9][0-9]*:[^:]*:[^:]*:[^:]*:\([^:]*\):/\1:*:*:*:*:\2:/' name-of-text-file
    

    You may have to fiddle with escape characters a little depending on your shell.

    Greeblesnort : don't forget sed -f if you've got multiple changes to make....
  • I'd probably do it in perl:

    perl -pi.bak -e 's/^(\w+\s\w+):\d+:\w+:\w+:\w+:/$1:*:*:*:*:/' FILENAME
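
An awk sketch of the same masking (my assumption, not from the thread: fields 2 through 5 blanked, everything else kept; since the first field contains a space but no colon, ':' works as the sole delimiter):

```shell
# Mask fields 2-5 of a colon-delimited record with '*' (sample data made up).
printf 'text text:42:a:b:c:keep:*:*\n' |
awk 'BEGIN { FS = OFS = ":" } { $2 = $3 = $4 = $5 = "*"; print }'
# -> text text:*:*:*:*:keep:*:*
```

GNU awk can edit a file in place with gawk -i inplace; otherwise write to a temporary file and move it over the original.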

How to pass Apache SSL traffic through an nginx proxy?

I want to set up nginx 0.6.33 on Fedora 8 as a load balancer for backends running Apache. I had no problem configuring it for the default HTTP port 80, but I don't know how to do it for SSL (443). I don't want to install the SSL certificates on the nginx box; I'd like it to pass the whole traffic through to the Apache servers, which have the certificates already installed.

My configuration looks like this:

http {
  upstream backend{
    server 192.168.0.1;
    server 192.168.0.2;
  }

  upstream secure{
    server 192.168.0.1:443;
    server 192.168.0.2:443;
  }

  server{
    listen 80;
    server_name www.my-server.net;
    location / {
      proxy_pass http://backend;
      proxy_set_header Host $http_host;
      proxy_redirect false;
    }
  }

  server{
    listen 443;
    server_name www.my-server.net;
    location / {
      proxy_pass https://secure;
      proxy_set_header Host $http_host;
      proxy_redirect false;
    }
  }
}
  • Don't use nginx as a load balancer; it isn't designed for that sort of thing. Specifically, in your case, it doesn't support transparently passing TCP connections to a backend, so you'll need to have your SSL certs set up in nginx and pass back to Apache as regular HTTP.

    You want either a true L3 load balancer like LVS, or else a TCP proxy like haproxy, that will allow the SSL connections to just flow straight through to Apache.
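
    For the haproxy route, a minimal sketch of the TCP pass-through (the backend addresses come from the question; the listener name and options are illustrative and untested):

```
listen https_passthrough
    bind *:443
    mode tcp
    balance roundrobin
    server apache1 192.168.0.1:443 check
    server apache2 192.168.0.2:443 check
```

    In tcp mode haproxy never inspects the TLS payload, so the certificates stay on the Apache backends.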

    From womble

Tunnel web request via ssh

Hello, I am looking to give a friend who is located outside the country access to a website which is blocked outside this country. I have a Linux box here inside the country.

I have a user, bob, set up on the local box. Here is the setup so far: the remote user has put xx.xx.xx.xx:4444 in as the proxy in their browser.

What I am looking for is the correct ssh command which, when run on the local box, will listen for traffic on port 4444 and forward the requests on.

I have tried this, but it returns blank pages: ssh bob@localhost -g -D 1900

thanks in advance

.k

  • Using ssh for this only makes sense if they're opening the ssh session from their end. That would create a proxy on their box that would exit from your box, inside the country. This works great with putty on Windows as well as with normal *nix ssh clients.

    It would work like this:

    1. On their box, they would run ssh -D 4444 yourserver.
    2. Then they would configure their web browser to point to "localhost:4444" as the proxy server.
    3. The local ssh client would accept the proxy request, forward it through the ssh tunnel to your server, where it would exit out to the internet.

    If, however, you just want to set up a proxy server on your box that they can connect to directly from their browser (by configuring your box as the proxy server) then you want something like squid.

    Keet : Thanks for the reply. Just for the record, this is possible, opening the ssh session at the server end. [This was for a short time, and I am sure it would be dodgy to leave it open for longer.] Running 'ssh bob@localhost -g -D 4444' on the server box and setting the remote user's browser to use a SOCKS proxy to the server box IP, port 4444, worked OK. .k
    From Insyte
  • You should also consider the implications of your actions. There is probably a reason why things are blocked in their country. You may want to consider what happens if they are discovered by their government trying to circumvent controls. With great power comes great responsibility.

    Keet : considered of course...
    David Collantes : Yet, your advice was uncalled for, and not technically related at all. Let's stick to what's asked and not pretend we are 'dad' or 'mom'...
    sybreon : Although not technical, I still feel that it is fair advice. If the OP had already considered it, fine and well. But if not, it should be something to consider, aside from the mere technical issues.
    From sybreon
  • Oops, sorry. This was working all along, just a dumb mistake by me.

    thanks anyway .k

    From Keet

Creating a SQL server Maintenance Plan without the wizard

Is there a way to generate a SQL Server maintenance plan that will be the same as the one generated by the wizard, but without going through the wizard dialog? I need something that can be created automatically as part of an install process.

Also, it looks like what the SQL wizard generates is specific to the current schema (for example, for re-indexing tables). Will the wizard-generated plan break on any schema change? If so, is there a way to update it, other than regenerating the whole thing again?

  • There's no way (that I've ever seen) to create a maintenance plan without going through either the wizard or the designer. It's not possible using SMO. You could create an SSIS package that will perform your maintenance plan tasks and deploy that during your installer. Based on your comments above, though, I would suggest writing a couple of stored procedures that will perform your tasks. Anything you can do with maintenance plans, you can write TSQL for.

    From squillman
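  • A sketch of the "just write the TSQL" approach (the database and table names are placeholders I made up):

```shell
# Hypothetical install step: write the maintenance TSQL to a file, then point
# sqlcmd (or a SQL Agent job step) at it. All object names are examples only.
cat > maintenance.sql <<'SQL'
ALTER INDEX ALL ON dbo.MyTable REBUILD;
DBCC CHECKDB ('MyDatabase') WITH NO_INFOMSGS;
SQL
# The installer could then run, for example:
#   sqlcmd -S myserver -E -i maintenance.sql
```

Because it's plain TSQL, a schema change only requires editing the script, not regenerating a plan.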
  • Maintenance plans in SSMS are stored as SSIS packages, so they aren't scriptable. But you can store them as files and, in theory, import them into an instance of SQL Server:

    http://social.msdn.microsoft.com/forums/en-US/sqlsmoanddmo/thread/ac65cef1-22ee-454f-9947-071a343a0cd9/

    However, I think it's much easier to just script the SQL a package runs instead, as mentioned by squillman.

  • I would recommend having a set of SSIS packages that do your maintenance plan work; there are tasks for that in the Toolbox. Store them on a shared network drive.

    Have them all use the same connection manager. Then, it's extremely easy to execute the maintenance plan style package, changing the connection manager's instance name with the /set option. You can run the package from the command line, from SQL Agent, wherever you like.

    As well as doing this, put them on a USB stick on your keyring, so that you can always have them handy.

    From Rob Farley
  • Rather than using maintenance plans, consider using home-grown stored procedures.

    Here's an excellent example.

    http://ola.hallengren.com/

    (And don't shrink your data files).

Extremely High V_$SYSTEM_EVENT Wait Times on Oracle DB

I'm working on troubleshooting an Oracle DB that's having some general performance problems. I ran the following query:

SELECT event AS "Event|Name",
       total_waits "Total|Waits",
       round(time_waited / 100, 0) "Seconds|Waiting",
       total_timeouts "Total|Timeouts",
       average_wait / 100 "Average|Wait|(in secs)"
  FROM sys.v_$system_event e
  ORDER BY time_waited DESC;

The first few lines returned as follows. Millions of seconds of wait time! (By comparison our other DBs are < 10 seconds of wait time for the top events.) What do these events do and what could cause these massive wait times? The DB has been up for 30 days so we're seeing aggregation over that much time.

Event Name                                 Waits    Seconds Timeouts  Avg Wait
----------------------                 ---------   -------- --------  --------
SQL*Net message from client            488397968   32050594        0    0.0656
rdbms ipc message                       91335556    2455744  9529486    0.0269
DIAG idle wait                           5214769     347077  5214769    0.0666
Streams AQ: qmn coordinator idle wait     186521     173696    93278    0.9312
Streams AQ: qmn slave idle wait            95359     173692       51    1.8215
Space Manager: slave idle wait            523165     173647   521016    0.3319
pmon timer                                968303     173630   870108    0.1793
fbar timer                                  8770     173403     8713   19.7723
smon timer                                 14103     173278     7006   12.2866
log file sync                           57967889      90402   649458    0.0016
log file parallel write                 86618366      39509        0    0.0005
db file sequential read                244286101      11171        0         0
control file parallel write              1274395       3949        0    0.0031
db file scattered read                 157316868       1635        0         0
db file parallel read                   11948170       1190        0    0.0001
  • "SQL*Net message from client" is the time spent by the database waiting to be asked to do something by a client (I would also interpret this to be an indicator of the number of SQL*Net requests processed by the database). AskTom has more information about the event. It doesn't look like a very long average wait, either, so perhaps you've got an app that's making LOTS of small requests to the server? That's a lot of waits in 30 days (average of 16M per day).

    As for the rdbms ipc message, this means (Oracle 10g Reference):

    "The background processes (LGWR, DBWR, LMS0) use this event to indicate that they are idle and are waiting for the foreground processes to send them an IPC message to do some work."

    This is generally a non event from a tuning perspective. (Burleson)

    Gary : Just to add that this may be a 'traditional' client/server type app where people log in at 9:00 and log off at 17:00 and have a database session all that times, mostly doing nothing.
    DCookie : Except in that case you would see high wait times but relatively few wait events, no?
    jeffspost : I've done some more research and it appears basically every SQL*Net message from client event is generating a small wait time. It's sort of like the system is working in slow-motion. Periodically through the day we have system wide slowdowns. I suspect at the root this is the cause--slower operations = more operations, eventually filling the capacity of the server. Any idea of why there would be an across the board delay for client responses?
    DCookie : This message means Oracle is waiting for the client to give it something to do. So, the issue is with the client apps or with the network. Anyone checked the NIC on the server? They do go bad sometimes. Switches and routers can go bad as well. Perhaps changing the port on a switch or router could eliminate that as a possibility.
    DCookie : Also, cables should be checked.
    From DCookie
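  • Most of the heavy hitters in the listing above (rdbms ipc message, the Streams AQ waits, the pmon/smon/fbar timers, SQL*Net message from client) are idle events. On 10g, v$system_event exposes a WAIT_CLASS column, so a variation of the query (a sketch, untested here) can rank only non-idle waits:

```sql
SELECT event,
       total_waits,
       ROUND(time_waited / 100) AS "Seconds|Waiting"
  FROM sys.v_$system_event
 WHERE wait_class <> 'Idle'
 ORDER BY time_waited DESC;
```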

Fixed Length Field Export from SQL Server

How do I go about exporting SQL Server 2005 tables to a fixed length field text file, with one record per line? SQL seems to think fixed length is the only way to delimit records, i.e. fieldLen1 + fieldLen2 + ... + fieldLenN = recLen. I can't find any way of delimiting each record with a newline.

Also, when I tried the wizard from a normal DB engine connection, it puked on a column name called "ShortLocation". I fail to see what could be difficult about writing that out to a text file, but I gave up on the wizard and am trying an SSIS package; as soon as I make the format "Fixed Width" I lose the row delimiter option.

Surely my requirement is not that unheard of?
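
  • One possible route (a sketch, not a verified answer): bcp with a non-XML format file supports fixed-width fields, and giving only the last field a terminator of \r\n yields one record per line. The field names and widths below are placeholders:

```shell
# Hypothetical bcp format file: three fixed-width SQLCHAR fields with no
# per-field terminators; "\r\n" on the last field ends each record with a
# newline. The 9.0 version line corresponds to SQL Server 2005.
cat > fixed.fmt <<'FMT'
9.0
3
1  SQLCHAR  0  10  ""      1  Col1  ""
2  SQLCHAR  0  20  ""      2  Col2  ""
3  SQLCHAR  0  15  "\r\n"  3  Col3  ""
FMT
# Export with something like:
#   bcp MyDb.dbo.MyTable out out.txt -f fixed.fmt -S myserver -T
```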

SQL Server Management Studio not scripting all objects

I've been attempting to script a database using SQL Server 2005 Management Studio. I cannot get it to script some objects. It scripts others, but skips some.

I can provide detailed screen shots of:

  • the options being selected
  • including all tables
  • the folder where the script files will go
  • the folder being empty before scripting
  • the scripting process saying Success when scripting a table
  • the destination folder no longer empty, with a hundred or so script files
  • the script of some tables not being in the folder.

And earlier SSMS would not script some views.

Is it a known issue that the Generate Scripts task does not generate scripts for some objects?


Update

Known issue on Microsoft Connect, but Microsoft couldn't repro the steps, so they closed the ticket.

Fails on SQL Server 2005, also fails on SQL Server 2008.


Update Two

Some basic questions:

1. What version of SQL Server?

 Microsoft SQL Server 2000 - 8.00.194 (Intel X86)
 Microsoft SQL Server 2005 - 9.00.3042.00 (Intel X86)
 Microsoft SQL Server 2008 - 10.0.2531.0 (Intel X86)
 Microsoft SQL Server 2005 Management Studio: 9.00.4035.00
 Microsoft SQL Server 2008 Management Studio: 10.0.1600.22

2. What O/S are you running on?

 Windows Server 2000
 Windows Server 2003
 Windows Server 2008

3. How are you logging in to SQL server?

 sa/password
 Trusted authentication

4. Have you verified your account has full access to all objects?

 Yes, I have access to all objects.

5. Can you use the objects that fail to script? (e.g. select top(10) * from nonScriptingTable)

 Yes, all objects work fine.
 SQL Server Enterprise Manager can script the objects fine.


Update Three

They fail no matter what version of SQL Server you script against. It wasn't a problem in Enterprise Manager:

Client Tools  SQL Server 2000  SQL Server 2005  SQL Server 2008
============  ===============  ===============  ===============
2000          Yes              n/a              n/a
2005          No               No               No
2008          No               No               No

Update Four

No errors found in the database using:

DBCC CHECKDB
go
DBCC CHECKCONSTRAINTS
go
DBCC CHECKFILEGROUP
go
DBCC CHECKIDENT
go
DBCC CHECKCATALOG
go
EXECUTE sp_msforeachtable 'DBCC CHECKTABLE (''?'')'

Honk if you hate SSMS.

  • I have never had this problem before.

    Some basic questions:

    1. What version of SQL Server?
    2. What O/S are you running on?
    3. How are you logging in to SQL server?
    4. Have you verified your account has full access to all objects?
    5. Can you use the objects that fail to script? (e.g. select top(10) * from nonScriptingTable)

    With some further information we may be able to offer some help.

    Update 1:

    Have you tried detaching your 2000 database, reattaching it to a SQL 2005/2008 instance, raising the database compatibility level, and then trying your scripting?

    From Wayne
  • I wrote a command-line utility to script all MSSQL objects via SMO.

    It would be interesting to know whether it manages to script all of your objects. If the tool fails for a single object, the SMO exception is written to stderr.

    From devio