Sunday, January 23, 2011

Very large database, very small portion being retrieved in real time

Hi folks, I have an interesting database problem. I have a DB that is 150GB in size. My memory buffer is 8GB.

Most of my data is rarely retrieved, or retrieved mainly by backend processes. I would very much prefer to keep it around, because some features require it.

Some of it (namely, certain tables and identifiable parts of other tables) is used very often in a user-facing manner.

How can I make sure that the latter is always being kept in memory? (there is more than enough space for these)

More info: We are on Ruby on Rails. The database is MySQL; our tables use the InnoDB storage engine. We shard the data across 2 partitions. Because of the sharding, we store most of our data as JSON blobs and index only the primary keys.

  • The best you can probably do is examine execution plans for your long-running queries and tune 1) the query and 2) the database appropriately. You could build indexes for the "identifiable parts of certain tables" to speed up queries. You could also move your more frequently used data into its own table, and the less frequently used data into another.

    Doing this with JSON blobs will be difficult because if you need access to one attribute of the JSON blob, you will have to fetch and parse the whole blob. If your JSON blobs are in a consistent format, build a real table structure to reflect that, and you'll probably 1) already have improved the performance and 2) have a much more flexible structure when you need to performance tune later.
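    To make that suggestion concrete: if the JSON blobs have a stable shape, a normalized table might look like the following sketch (every column name here is hypothetical, not taken from the original setup):

```sql
-- Hypothetical replacement for a JSON blob such as
-- {"name": ..., "email": ..., "last_seen": ...}
CREATE TABLE user_profiles (
    user_id   BIGINT UNSIGNED NOT NULL PRIMARY KEY,
    name      VARCHAR(255)    NOT NULL,
    email     VARCHAR(255)    NOT NULL,
    last_seen DATETIME        NULL,
    KEY idx_last_seen (last_seen)  -- individual attributes become indexable
) ENGINE=InnoDB;
```

    With real columns, the hot attributes can be read and indexed without fetching and parsing the whole blob.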

    From Shin
  • There are a lot of options here. First, NDB is MySQL's clustering engine, which stores data in memory. NDB does have some limitations, however.

    memcached is a popular solution, but it requires the application architecture to support it.

    You could store specific MyISAM tables on a RAM disk, since MyISAM tables can be relocated individually; with InnoDB, the entire tablespace would have to live on the RAM disk.

    You may find the MEMORY engine better suited than my RAM disk hack. MEMORY tables are more limited than those of other engines, though; they can't hold BLOB columns, among other things. For the data to be maintained across restarts, you would need a wrapper script to dump and restore it. This also introduces risk: a power loss, even with such scripts, would mean data loss.

    Ultimately, you will likely benefit the most from properly tuning and optimizing your MySQL database and queries. A properly tuned MySQL database makes good use of memory caching.

    There are a lot of resources available on this already, both on Server Fault and the Internet as a whole. MySQL has its own documentation and there's a MySQL Performance Blog post, both very useful resources. There's also a post with a formula for calculating InnoDB memory usage.
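    For InnoDB in particular, most of that caching happens in the buffer pool. As a hedged starting point (the values below are illustrative for an 8GB box, not tuned recommendations), my.cnf might contain:

```ini
[mysqld]
# Give InnoDB the bulk of available RAM; frequently read pages stay cached
innodb_buffer_pool_size = 6G
# Per-connection buffers are multiplied by max_connections, so keep them small
sort_buffer_size        = 2M
read_buffer_size        = 1M
# Cache open table handles so hot tables are not repeatedly re-opened
table_open_cache        = 1024
```

    With a buffer pool sized like this, the user-facing tables and indexes that are touched often will naturally stay resident in memory.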

    TomTom : useless given that the post already talks of ONLY 8gb memory in the machine, you know.
    Warner : Perhaps you should re-read my post. Is English not your native language?
    From Warner

BIND split-view DNS config problem

We have two DNS servers: one external server controlled by our ISP and one internal server controlled by us. I'd like internal requests for foo.example.com to map to 192.168.100.5 while external requests continue to map to 1.2.3.4, so I'm trying to configure a view in BIND. Unfortunately, BIND fails when I attempt to reload the configuration. I'm sure I'm missing something simple, but I can't figure out what it is.

options {
        directory "/var/cache/bind";
        forwarders {
         8.8.8.8;
         8.8.4.4;
        };
        auth-nxdomain no;    # conform to RFC1035
        listen-on-v6 { any; };
};
zone "." {
        type hint;
        file "/etc/bind/db.root";
};
zone "localhost" {
        type master;
        file "/etc/bind/db.local";
};
zone "127.in-addr.arpa" {
        type master;
        file "/etc/bind/db.127";
};
zone "0.in-addr.arpa" {
        type master;
        file "/etc/bind/db.0";
};
zone "255.in-addr.arpa" {
        type master;
        file "/etc/bind/db.255";
};
view "internal" {
      zone "example.com" {
              type master;
              notify no;
              file "/etc/bind/db.example.com";
      };
};
zone "example.corp" {
        type master;
        file "/etc/bind/db.example.corp";
};
zone "100.168.192.in-addr.arpa" {
        type master;
        notify no;
        file "/etc/bind/db.192";
};

I have excluded the entries in the view for allow-recursion and recursion in an attempt to simplify the configuration. If I remove the view and just load the example.com zone directly, it works fine.

Any advice on what I might be missing?

  • First, check your logs, but I think you forgot:

    acl "lan_hosts" {
        192.168.0.0/24;             # network address of your local LAN
        127.0.0.1;              # allow loop back
    };
    view "internal" {
            match-clients { lan_hosts; };   
    [...]
    };
    
    organicveggie : Actually, match-clients is not required. From http://www.zytrax.com/books/dns/ch7/view.html, "If either or both of match-clients and match-destinations are missing they default to any (all hosts match)."
    From Dom
  • Post what named said.

    organicveggie : Huh. Didn't know about "named-checkconf" until now. Running named-checkconf reports: /etc/bind/named.conf:12: when using 'view' statements, all zones must be in views
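    Given that error, the sketch of a fix is to wrap every zone in a view once any view is used - including the root hint, localhost, and reverse zones (abridged; the file paths match the configuration above):

```
view "internal" {
        match-clients { 192.168.100.0/24; 127.0.0.1; };
        zone "example.com" {
                type master;
                notify no;
                file "/etc/bind/db.example.com";
        };
        // the hint, localhost, and reverse zones must move in here too
};
view "external" {
        match-clients { any; };
        // zones for everyone else, e.g. "example.corp",
        // plus another copy (or include) of the common zones
};
```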
    From urmum

Webserver concurrent connections

Where can I get statistics of concurrent connections that can be handled by Apache and IIS? Which one will serve more requests in peak times?

Thank you, Sri

  • That's hard to answer. As always, the answer to questions like this is "it depends." What type of requests? Static? Dynamic? Large? Small? Internal? External? A bit more information about your environment is needed to answer with any degree of accuracy.

    From McJeff

Where should I store the VM files (config, snapshots, vhd) on a Hyper-V server?

By default the VHDs go into “C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks” and the config files go into “C:\ProgramData\Microsoft\Windows\Hyper-V”.

Should I leave them there?

Is it ok for the VHDs to be in a “Public” folder?

  • Here is what I do:

    • I pretty much always use a RAID 10 for the Hyper-V host, with 4 discs: either Black Scorpios (lower performance) or Velociraptors.

    • 64GB base partition

    • The rest is a second partition, "V:"

    • VMs live on V.

    • Public is not ok - I mean, seriously, what for?

    aduljr : You don't want your server images on a public drive. I would keep the server images on a separate disk system, either RAID 1 or RAID 10 depending on your performance needs; snapshots I would store on a different drive or storage server for backup and retrieval purposes. I guess the question to really ask is: what are you doing to begin with? Are you learning within a home environment or lab setup? Or is this going to be something that will go into production?
    TomTom : Well, normally the OS discs basically do nothing. After startup you can ignore them - as long as nothing other than Hyper-V runs on them. The RAID 10 stops me from having to put more discs in. Personally, I do that with a lot of setups - sometimes quite high performance (32GB server, 4 cores, running SQL Server with directly mapped discs for the real data).
    From TomTom
  • In general, you'll want your VMs on a disk subsystem that is redundant and shared with every member of the Hyper-V cluster. This will almost never be C:.

How to setup Joomla CMS as a backend for iPhone app

I would like my iPhone app to get dynamic content off the net. This content should be managed using a CMS.

I have gone ahead and installed Joomla on my server and will be using the Joomla web interface to create and manage content.

I would now like the iPhone app to login to my server and fetch the content. I do not want the complete web pages for my iPhone app. Instead, I want the content in the form of XML or JSON or some serialized format so that I can use the data in a custom layout native to the app.

So I am looking for 2 things in particular: 1. How to setup HTTP based authentication for my iPhone app to access data from my server. 2. How to access the content in a serialized format (XML, JSON etc)

Are there plugins/extensions/components I can use to achieve this?

Any advice on how this can be achieved would be helpful.

I am completely new to setting up/using CMS.

  • I would use Osmek instead of Joomla. Osmek is a CMS specifically tailored to serve JSON and XML through its API, so no hacking is involved. The API is the basis of their service and works through HTTP requests via POST data.

    srik : Although I was looking at setting something up on my own server using an existing CMS like Joomla or Drupal, the free version of Osmek serves the purpose for simple applications. Thanks.

Network latency -- how long does it take for a packet to travel halfway around the world?

Possible Duplicates:
How does geography affect network latency?
How much network latency is “typical” for east - west coast USA?

If I'm hosting an app in NY, what kind of delay can I expect for a user to get a single packet if they're in Australia, i.e. roughly the maximum distance from NY?

I'm looking for the maximum latency I'm likely to encounter on a regular basis -- if Australia's not the right destination point to consider, feel free to substitute another point.

Thanks!

Michael

  • This information is hard to come by, as you might know, because it depends on the individual endpoints and how they are routed to the destination - especially on how each endpoint's provider is switched and prioritized over the deep-sea cables between the continents. It also differs wildly from destination continent to destination continent, as traffic for one continent is sometimes routed through another continent (read: sea cables). It also seems that the stretch between the fiber lines and each endpoint contributes most of the latency, so it is more about the customer's Internet connection than the backbone you're sitting on. Be sure to have a look at the links provided by Zypher and Ward.

    If latency is a problem, think about a content delivery network, which serves each continent locally. That might help if you don't need data written to your NY server in real time.

    Cisco has some small write-ups for VoIP that are worthwhile to read, and there is a forum thread with some user measurements. The numbers differ wildly, but never forget that users often mix up ping and latency (as in those forum posts).

    Personally, I would expect about 200 milliseconds end to end, just to be safe.

    What I would do is take an eDonkey client with latency readings (I believe Azureus has this) and look at connections from my destination to interesting spots on the map. That way you get real-life end-to-end latency data.

  • If you had a fiber optic cable straight from NY to Sydney, the distance latency by itself would be ~90ms. Realistically you'd be lucky to stay under 200ms.
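    That figure can be sanity-checked with a back-of-the-envelope calculation (the distance and fiber speed below are rough assumptions, not measurements):

```python
# Rough propagation delay NY -> Sydney, ignoring routing, queuing and
# the last-mile latency discussed above.
distance_km = 16_000        # approximate great-circle distance (assumption)
fiber_speed_km_s = 200_000  # light travels at roughly 2/3 c in glass fiber

one_way_ms = distance_km / fiber_speed_km_s * 1000
round_trip_ms = 2 * one_way_ms

print(one_way_ms)     # 80.0
print(round_trip_ms)  # 160.0
```

    So ~80 ms one way is the physical floor; real paths add routing detours and per-hop delays on top, which is why staying under 200 ms round trip is already doing well.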

    From Chris S

PHP FastCGI SAPI: Reloading PHP Configuration

I am using the PHP FastCGI SAPI in my web hosting environment to run PHP applications. To spawn FCGI processes I use the spawn-fcgi helper program. My problem is that whenever I make a change to php.ini, I have to kill and respawn each FastCGI server for the new configuration to take effect.

Is there a way to reload the PHP configuration (i.e. php.ini directives) without respawning each FastCGI server? I tried sending a hangup signal (i.e. kill -HUP PHPCGIPID) to the servers, but this resulted in their termination.

  • As far as I know, PHP's FastCGI interpreter doesn't react to signals like HUP, USR1 or USR2 to reload its configuration.

    Maybe PHP-FPM could help you to achieve what you want. On the downside, it requires patching PHP.

    From joschi
  • If the servers are spawned automatically, kill them. If they’re manually started, restart them. PHP doesn’t have the ability to reload its own configuration — and generally, killing/restarting is not a problem. Is there a reason why you can’t kill them in this instance?

    From Mo

Plesk 9.2.1 reporting much more SMTP traffic than the logs indicate

Plesk is reporting nearly 7GB of SMTP traffic so far this month on one domain, most of it outgoing. However, after running qmail's mail logs (which only go back to May 8) through Sawmill, only about 900MB of traffic on that domain is accounted for.

What I know so far:

  • Email sent via PHP's mail() function is sent through sendmail, which has been logging its output via syslog to the same logs that qmail uses, at /usr/local/psa/var/log/

  • Messages sent by logging in directly via Telnet are logged as well

  • I verified that Plesk is reporting totals correctly by creating a new domain, sending some large emails through it, running Plesk's statistics calculation script, and comparing its reported totals to the actual size of the emails sent

  • The problem domain did have three mail accounts with blank or insecure passwords, which I corrected

Does anyone know how Plesk calculates SMTP traffic statistics? Are there some log files elsewhere that I'm missing? What kind of SMTP traffic would Plesk know about that isn't being logged?


EDIT: Since fixing the blank/weak mail passwords on the domain, the traffic seems to have returned to normal, but I still haven't figured out how Plesk calculated those amounts if they aren't showing up in the logs.

  • Hello,

    I have been struggling with this problem for 2 months. Were you able to find a solution?

    From Seha

Can't uninstall windows service

I have somehow managed to half uninstall a windows service I was developing.

In no particular order

  • It won't delete if I use sc delete servicename

  • It gives an exception using installutil /u pathtoservice.exe

    "specified service does not exist as an installed service"

  • And using the installer/uninstaller obviously doesn't work either

  • It's no longer in the Services listing

  • It's not shown if I use sc query

  • And I have rebooted

I don't know what else to do, but something still exists, because attempting to install fails because it already exists.

Please help.

UPDATE:

...Could it be the stuff in the registry elsewhere?

HKEY_CLASSES_ROOT\Installer\Assemblies\D:|Program Files|[path to].exe

HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Installer\Assemblies\D:|Program Files|[path to].exe

HKEY_CURRENT_USER\Software\Microsoft\Windows\ShellNoRoam\MUICache

HKEY_CLASSES_ROOT\Installer\Products...

HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Installer\Products...

etc...

  • I'd try downloading the sysinternals suite from Microsoft (free download) and run procmon and/or regmon and/or Filemon (although procmon is supposed to merge their functionality) and see where in the install process there's a fail or denied message and look there to hunt down why your installer thinks it's already installed. Hopefully it might give a clue as to what's going on.

    Chad : @Bart Silverstrim, I don't see anything, but I'm not sure what I'm looking for, I have 2810 events when running InstallUtil
    Bart Silverstrim : Anything that says "denied" or "failed" as a result, or at the point where you have the failure towards the end of the program run you stop monitoring and see what is happening near the end of the trace, what the last files and keys were that were being accessed :-)
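    One concrete place to check is the service's own registry key; a half-removed service sometimes leaves it behind. (MyServiceName below is a placeholder for the actual service name.)

```
sc query MyServiceName
reg query "HKLM\SYSTEM\CurrentControlSet\Services\MyServiceName"

rem If the key still exists after a reboot, deleting it (after exporting a
rem backup with "reg export") is one way to clear the "already exists" state:
reg delete "HKLM\SYSTEM\CurrentControlSet\Services\MyServiceName"
```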
  • Finally found a combination that worked.

    • UnInstall with InstallUtil /u

    • ReInstall with InstallUtil /i

    • Then uninstall with the SetupProject

    Uninstalling with InstallUtil alone didn't leave things in a clean state, and installing with the Setup project failed.

    It was this specific combination that got everything into the correct state again so that I can use the setup project to once again install AND uninstall.

    ...note to self: never hit cancel during the installation of a service again!

    From Chad

Using ADFS 2.0 for Google apps single sign on

Microsoft Active Directory Federation Services 2.0 has been recently released, and it has passed interoperability tests for SAML 2.0.

Does this mean that it can be used to authenticate users of Google Apps, which also uses SAML?

Has anyone successfully setup Google apps with ADFS 2.0 for single sign on?

If you have gotten it to work, please tell us what was required to get it working.

To put it another way, does someone have a good HOWTO for using ADFS 2.0 and Google Apps together? I was not able to find anything through a search of the web.

How to connect to IIS and SQL Server Express on Windows 7 host from XP Mode

Hello,

I am running IIS and SQL Server 2008 Express on my Windows 7 host, and I'd like to be able to connect to them from XP Mode. My host machine is not part of a domain, only a workgroup.

So far, I've tried these instructions on connecting to SQL Server, but I'm not able to telnet to port 1433 on the host from XP Mode. I'm also not able to connect using a SQL client.

I'm not able to connect to IIS on the host from XP Mode.

Advice from those who have had success doing this would be appreciated.

Thanks,

Jon.

Update: I'm wondering if this is a simple permissions issue. I'm on a workgroup, not a domain, so I can't test/fix this simply by granting permissions in SQL Server and IIS to a domain user.

  • The end of those instructions talks about firewalls... Did you check the firewall on the host? If it is on, turn it off just to test whether that's the issue. If it is, open the necessary ports. If that is not it, think about whether any other firewalls sit between your host and client.

    Hope it's as simple as that. Let us know.

    SpatialBridge : Yes, turned firewall off both on host and virtual machine. Still no go. Using Microsoft Loopback Adapter, so should be no other firewalls in play.
    Campo : Are you allowing remote connection for SQL? The setting is in the Management Studio.
    SpatialBridge : Yes, remote connections are allowed.
    Campo : Is it a named instance?
    SpatialBridge : Turned out to be a permissions issue -- see answer below.
    From Campo
  • This turned out to be a permissions issue. Since I am not on a domain, I was trying to connect to the host machine from the virtual machine using a local account on the virtual machine.

    To fix the problem, I stored credentials for a host machine account inside the virtual machine.

    The steps I used are as follows:

    1. In the virtual machine, click Start > Control Panels > Users.
    2. Click the name of the current user (or whichever user you're using to run the client apps that are connecting to the host machine).
    3. Click "Manage my Network Passwords".
    4. Click Add to open the Logon Information Properties dialog.
    5. Type the name of the host machine in the Server text box.
    6. Type the name of the appropriate host machine local account in the User name text box (\).
    7. Type the password for the host machine local account in the Password text box.
    8. Click OK > Close.

    I also had to open the Windows firewall to HTTP requests to be able to access IIS.

    Following this procedure, I can now access SQL Server and IIS from the XP Mode virtual machine.

    Campo : NICE! good call.

Apache rewrite rules and special characters

I have a server where some files have an actual %20 in their name (they are generated by an automated tool which handles spaces this way, and I can't do anything about this); this is not a space: it's "%" followed by "2" followed by "0".

On this server, there is an Apache web server, and there are some web pages which links to those files, using their name in URLs like http://servername/file%20with%20a%20name%20like%20this.html; those pages are also generated by the same tool, so I (again!) can't do anything about that. A full search-and-replace on all files, pages and URLs is out of question here.

The problem: when Apache gets called with a URL like the one above, it (correctly) translates the "%20"s into spaces, and then of course it can't find the files, because they don't have actual spaces in their names.

How can I solve this?

I discovered than by using an URL like http://servername/file%2520name.html it works nicely, because then Apache translates "%25" into a "%" sign, and thus the correct filename gets built.

I tried using an Apache rewrite rule, and I can successfully replace spaces with hyphens using a syntax like this:

RewriteRule    (.*)\ (.*)      $1-$2

The problem: when I try to replace them with a "%2520" sequence, this just doesn't happen. If I use

RewriteRule    (.*)\ (.*)      $1%2520$2

then the resulting URL is http://servername/file520name.html; I've tried "%25" too, but then I only get a "5"; it just looks like the initial "%2" gets somewhat discarded.

The questions:

  • How can I build such a regexp to replace spaces with "%2520"?
  • Is this the only way I can deal with this issue (other than a full search-and-replace which, as I said, can't be done), or do you have any better idea?

Edit:

Escaping was the key, it works using this rule:

RewriteRule    (.*)\ (.*)      $1\%2520$2

But it only works if there is one "%20" in the initial URL; I get an "internal server error" if there is more than one.

Looks like I'm almost there... please help :-)


Edit 2:

I was able to get it to work for two spaces using the following rule:

RewriteRule    (.*)\ (.*)\ (.*)     $1\%2520$2\%2520$3

This is enough for my needs, as URLs generated by the tool can only contain at most two "%20"s; but, out of curiosity: is there any way to make this work with any number of spaces? It works with the first rule if replacing any number of spaces with a normal character, this problem happens only when special characters are involved.

  • The % is being read as a back reference, so you need to escape the %.

    Massimo : Ok, but **how**? I tried "%%25" and "\%25", but both didn't work.
    Massimo : Ok, it worked using "$1\%2520$2", but see my edit on the main question for another problem.
    Nerdling : You can nest parentheses: to handle any number of something and catch it: ((pattern)*) You won't be able to reference these in the URL rewrite as the quantity may be infinite.
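    For the "any number of spaces" case, one untested sketch is mod_rewrite's [N] (next) flag, which restarts the ruleset after each substitution, so a single rule can replace one space per pass:

```
# Replace one literal space per pass, then rerun the ruleset from the top.
# Note: [N] can loop; mod_rewrite will abort with a 500 error if it
# recurses too deeply, so test carefully before relying on this.
RewriteRule ^(.*)\ (.*)$ $1\%2520$2 [N]
```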
    From Nerdling

What do I need to run Python on my web server?

What do I need to run Python on my web server?

Should I enable some Apache module?

Thanks

  • Yea, you need mod_python.

    Or you can enable CGI support by putting an AddHandler cgi-script .py directive in your httpd.conf. You'll also need a shebang on the first line of your script (i.e. #!/usr/bin/python).

  • Make sure you also look at mod_wsgi. It's a nice alternative to running mod_python. Also, depending on your application, FastCGI is an option (see packages like flup, which acts as a bridge between FastCGI and WSGI).
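    To give a feel for the mod_wsgi route, a WSGI application is just a callable named application; the Apache wiring shown afterwards is from memory, so verify it against the mod_wsgi docs:

```python
# hello.wsgi - a minimal WSGI application
def application(environ, start_response):
    # environ is a dict of CGI-style request variables
    body = b"Hello from Python via WSGI\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

    In httpd.conf this would be mapped with something like WSGIScriptAlias /hello /var/www/hello.wsgi.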

    David Zaslavsky : +1 mod_wsgi is the preferred way of doing it these days.
    From McJeff

mail server administration

My Postfix does not show that it is listening via the SMTP daemon; I am getting the message below:

The message WAS NOT relayed Reporting-MTA: dns; mail.mak.ac.ug Received-From-MTA: smtp; mail.mak.ac.ug ([127.0.0.1]) Arrival-Date: Wed, 19 May 2010 12:45:20 +0300 (EAT)

Original-Recipient: rfc822;dicts-test@muklists.mak.ac.ug Final-Recipient: rfc822;dicts-test@muklists.mak.ac.ug Action: failed Status: 5.4.0 Remote-MTA: dns; 127.0.0.1 Diagnostic-Code: smtp; 554 5.4.0 Error: too many hops Last-Attempt-Date: Wed, 19 May 2010 12:45:20 +0300 (EAT) Final-Log-ID: 23434-08/A38QHg8z+0r7 undeliverable mail MTA BLOCKED

OUTPUT FROM lsof -i tcp:25 command

master 3014 root 12u IPv4 9429 TCP *:smtp (LISTEN) (Postfix as a user is missing )

  • Is your Postfix the final hop for muklists.mak.ac.ug? Check your DNS entries for this domain. Is there an error?

    I think Postfix is saying: "Hey, it's not for me, I'll send it to muklists.mak.ac.ug. Who is muklists.mak.ac.ug? 127.0.0.1, says DNS (or /etc/hosts). So it gets sent to 127.0.0.1, which says: hey, it's not for me" - and so on.

    You can also check your logs: you should see a large loop, because hopcount_limit is 50 by default.

    yasin inat : Double-check the MX record of mail.mak.ac.ug. You can do it from the Linux box's terminal or with a third-party DNS checker like intodns.com or checkdns.net: http://www.intodns.com/mak.ac.ug
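    If this box really is meant to be the final destination for muklists.mak.ac.ug, the hop-count loop usually means the domain is missing from mydestination in main.cf; a sketch (domain taken from the bounce above):

```
# /etc/postfix/main.cf
# List the domain as locally delivered so Postfix stops relaying the
# message back to itself via an MX that resolves to 127.0.0.1
mydestination = $myhostname, localhost.$mydomain, localhost,
        muklists.mak.ac.ug
```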
    From Dom

SharePoint 2010 and Excel Calculation Services

I'm curious what the requirements are for Excel Calculation Services in SharePoint 2010. I found an architecture document, but it doesn't list the specific requirements. I understand that you can install all the services on one server, but that it isn't recommended. The document then talks about how you can scale up the application servers and web front ends. What should the hardware look like for both the application servers and the web front ends? Do I need to set up a standalone box for each application server?

How to synchronize IIS between Server 2008 R2 64-bit and Server 2008 32-bit

Hi,

Does anyone know of a way to synchronize IIS between Server 2008 R2 64-bit and Server 2008 32-bit? Msdeploy doesn't seem to like going from 64-bit to 32-bit. :(

Update: The command I'm running is this (on 2008 32 bit):

msdeploy -verb:sync -source:webserver,computerName=remote -dest:webserver

Output is: Error: Using a 64-bit source and a 32-bit destination with provider webServer is not supported

  • I wouldn't try to sync IIS itself. Would you be interested in replicating the websites' physical files?

    If so, check out DFS; it can span 32-bit and 64-bit systems - you just need a domain.

    It is VERY easy to set up and works great! I use it to sync our constantly changing social network between two servers. I sync user files as well as new launches of the site from the production server to the standby server.

    Here is a link to the technet overview.

    Hope that helps. May be a lot simpler this way. But may not be what you are trying to do.

    Robert Ivanc : thanks, unfortunately the machines are not on the same domain (not even on the same LAN).
    Campo : Are you looking for a real (or near real) time sync?
    Robert Ivanc : just a sync when I run the command. doesn't need to be realtime.
    Campo : This is for files or IIS settings of the actual websites?
    Robert Ivanc : for everything. msdeploy works now the other way around, as I described above
    From Campo
  • Well, I sort of managed to get it to work. The trick is to use msdeploy.exe from Program Files (x86) on the 64-bit 2008 R2 server, and run the sync from there TO the 32-bit 2008 server.

    Also, since there is no AppWarmup module on 2008 server you have to add: -skip:attributes.name="AppWarmupModule"

    It's not ideal, since I'd prefer to run it the other way around, but it does work.
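    Putting those pieces together, the working invocation run on the 64-bit R2 box would look roughly like this (the destination name is a placeholder, and the Web Deploy install path varies by version):

```
"%ProgramFiles(x86)%\IIS\Microsoft Web Deploy\msdeploy.exe" ^
    -verb:sync ^
    -source:webserver ^
    -dest:webserver,computerName=server2008-32bit ^
    -skip:attributes.name="AppWarmupModule"
```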

Login error in phpMyAdmin, problem setting auth_type in config.inc.php

I'm having a problem accessing phpMyAdmin.

A few weeks ago I succeeded in configuring it with auth_type = 'cookie', but I still received an error stating that I had to set blowfish_secret. That was strange, because it was set.

So I changed auth_type from cookie to http, but it didn't work. I changed it back to cookie, but it doesn't work anymore.

This is the error:

phpMyAdmin - Error

Cannot start session without errors, please check errors given in your PHP and/or webserver log file and configure your PHP installation properly.

This is my C:\wamp\apps\phpmyadmin3.2.0.1\config.inc.php:

<?php

/* Servers configuration */
$i = 0;

/* Server: localhost [1] */
$i++;
$cfg['Servers'][$i]['verbose'] = 'localhost';
$cfg['Servers'][$i]['host'] = 'localhost';
$cfg['Servers'][$i]['port'] = '';
$cfg['Servers'][$i]['socket'] = '';
$cfg['Servers'][$i]['connect_type'] = 'tcp';
$cfg['Servers'][$i]['extension'] = 'mysqli';
$cfg['Servers'][$i]['auth_type'] = 'cookie';
$cfg['Servers'][$i]['user'] = '';
$cfg['Servers'][$i]['password'] = '';
$cfg['Servers'][$i]['AllowNoPassword'] = false;
// EDIT:
// $cfg['Servers'][$i]['blowfish_secret'] = 'this is my passphrase';
$cfg['blowfish_secret'] = 'this is my passphrase';

/* End of servers configuration */

$cfg['DefaultLang'] = 'en-utf-8';
$cfg['ServerDefault'] = 1;
$cfg['UploadDir'] = '';
$cfg['SaveDir'] = '';

?>

I changed the blowfish_secret, since I didn't remember the old one, and I deleted the cookies in my browser and restarted all WAMP services and the browser. After I enter the username and password on the login page, I get the error.

I've tried searching into the log files, but I'm a newbie and I'm not sure I've searched the right ones.

I'm using Wamp server 2.0 that has Apache Version : 2.2.11
PHP Version : 5.3.0
MySQL Version : 5.1.36
phpmyadmin : 3.2.0.1

EDIT:

I've changed to auth_type='config', setting a username and password, and then accessed phpMyAdmin successfully. Then I changed back to auth_type='cookie', using the same config file as above, and it works again.
But now at the end of the phpMyAdmin main page I get the error:

The configuration file now needs a secret passphrase (blowfish_secret).

That is nonsense, since blowfish_secret is set.

EDIT:

I've solved the last problem changing configuration line to set the blowfish_secret. See the above configuration just after // EDIT

  • I don't think blowfish_secret is a per-server setting. It should just be:

    $cfg['blowfish_secret'] = 'this is my passphrase';
    

    And from the phpmyadmin error, it sounds like there might be something wrong with PHP's session configuration. If you can't find anything in the apache error log, make sure PHP is set to log errors by checking these values in php.ini:

    error_reporting = E_ALL & ~E_NOTICE
    log_errors = On
    

    You might also try clearing your browser cookies associated with phpmyadmin (I just solved a separate login issue by doing this).

    sergiom : Thank you for your answer. The php.ini error_reporting and log_errors values were already set, but I didn't find any useful entry in C:\wamp\logs\apache_error.log. I've already deleted all localhost cookies, which includes the phpMyAdmin cookies.
    yasin inat : Use php.ini-recommended for the WAMP installation. Check the php.ini file location via phpinfo(), then replace all session directives with the default values from php.ini-recommended. Also check the session save path variable; set it to a writeable directory.
    From Brian

Migrating a NAS while keeping the files and their rights intact?

Right, what we wish (or need) to do is migrate a NAS from an old machine to a new one.

The problem is, we wish to keep the folder hierarchy intact, along with all the settings and shares.

Now I vaguely recall microsoft used to have a tool to do exactly this.

It would copy the folders exactly, with the rights intact, and then remove the shares on the old machine and reactivate them on the new machine.

Now, is there anyone who could help me find it again, or perhaps suggest a better solution?

Much appreciated guys

  • You may be thinking of the Microsoft File Server Migration Kit (FSMT). Here's a link to the download from their site. FSMT has some pretty stringent OS requirements and is designed to move Windows file servers. Depending on the specifics of your NAS, this may or may not be the tool for you. Maintaining the permissions should be straightforward as long as you don't have local trustees contained in the permissions on the source NAS. If you do have local trustees, you'll need to decide whether to create corresponding local trustees on the destination NAS and repermission the data on the destination. The alternative would be to set up domain trustees and repermission using them. Tools like SecureCopy can help with these types of group manipulations as well as moving the data.

    From Fred
  • I ended up using RichCopy. Microsoft tool, works great, decent UI.

    Yes I could've used robocopy / FSMT but this works too.

    Cheers for any help anyways

    deploymonkey : You've beaten me to this one :)... OP, use RichCopy, enable the advanced functions, and go through the settings after that for full customization. Only there can you set owner + rights + ACL preservation. Also think about using the log feature; you have to set a new log file location and name, as on some systems the log file wasn't written to the default location. I copy and verify a second time with another tool just to make sure. You might want to keep the original at hand until you have checked that the rights were preserved.
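For reference, a hedged sketch of the robocopy alternative mentioned above. robocopy ships with Server 2008 and later; the share paths and log location below are placeholders, so substitute your own:

```shell
:: Mirror the old share to the new NAS, preserving security information.
:: /MIR      mirrors the tree (including deleting extras at the destination)
:: /COPYALL  copies data, attributes, timestamps, NTFS ACLs, owner and
::           auditing info (equivalent to /COPY:DATSOU)
:: /R:2 /W:5 limits retries so a locked file doesn't stall the run
:: /LOG      writes a log you can review to verify the transfer
robocopy \\oldnas\share \\newnas\share /MIR /COPYALL /R:2 /W:5 /LOG:C:\migration\share.log
```

As with RichCopy, this copies data and permissions only; shares themselves still need to be recreated on the destination (or migrated with FSMT).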

How to change RDS licensing mode from 'per user/device' to 'Remote control for administrators' on Windows 2008 R2 server

We have installed Windows 2008 R2 Enterprise on a Dell server. This server is located remotely in a data center, and only an administrator is going to access it for maintenance purposes. No multiple-user or client remote access is needed.

During the 'Remote Desktop Services' role installation, our network admin accidentally selected the 'per user/device' licensing mode, because of which the 120-day free trial period is now ticking. Since only an administrator is going to access this server remotely, we need the 'Remote control for administrators' licensing mode (as in Windows 2003).

How can we change the licensing mode from 'per user/device' to 'Remote control for administrators' on a 2008 server?

Also, will it be possible to make this change remotely using the RDC session itself, or do I need to use the physical console (in case remote access is disabled during the switch)?

  • Remove the Remote Desktop Services role installation. 2008 R2 supports Remote Desktop for administration WITHOUT THIS ROLE.

    The role is basically only there for the other mode.

    FutureProof : Thanks for the information! It solved the issue...
    From TomTom
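TomTom's suggestion can be sketched with the ServerManager PowerShell module that ships with 2008 R2. The role-service name below is my assumption, so confirm it with Get-WindowsFeature before removing anything:

```powershell
# List the Remote Desktop-related features to confirm the exact name first.
Import-Module ServerManager
Get-WindowsFeature RDS-* | Format-Table Name, Installed

# Remove the RD Session Host role service; a reboot is usually required.
Remove-WindowsFeature RDS-RD-Server -Restart
```

This should work over the RDP session itself, since admin-mode Remote Desktop remains available after the role is gone, but having out-of-band access (DRAC/iLO) on hand is prudent in case the reboot goes sideways.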

Can't connect remotely to Windows Server 2008 R2

I have a new Dell R710 server running Windows Server 2008 R2. I have one of its 4 NICs set up and the rest are not being used. I have successfully given it an IP address, network mask, and DNS servers. I can ping and resolve this machine from anywhere else on the network. However, when I try to connect to it via RDP, it does one of several things:

1) It might just outright refuse me with the message, "This computer can't connect to the remote computer. Try connecting again."
2) It might connect me and let me choose the account I would like to log on as, but when I select an account I receive the same message as in #1.
3) It might actually allow me to connect, but only for about 1 minute, and then I receive the same message and it closes my session.

I have configured the firewall service to allow RDP over the domain network connection. This didn't have any noticeable effect. I have now disabled the firewall for all 3 networks and have even stopped the Windows Firewall service. I am still having the same issue.

I am new to Server 2008 R2 and things are very different. Please give me any advice you can on how to resolve this issue and/or any other gotchas that are sure to come my way. The 2003 -> 2008 learning curve seems steep.

Thanks

Update #1: I appear to be getting disconnected even when accessing network resources (a file share) from the server in question. This is a Dell R710 which has 4 NICs. I have only one connected and configured. I would think that these connections need not all be set up, but perhaps I'm wrong?

Update #2: The same thing happens when I navigate to the default administrative share on the R710 server. About 50% of the time, when I navigate from one directory to another, I get the same message as stated in a comment below: "The computer can't connect to the remote computer. Try connecting again. If the problem continues, contact the owner of the remote computer or your network administrator". If I keep trying the same directory I eventually get in. Is this some sort of intelligent firewall service?

Update #3: I downloaded the latest Broadcom NetXtreme II drivers for 64-bit Windows 2008. Windows is telling me that my drivers are up to date. I also just tested RDP from the 2008 server to my XP SP3 workstation and it works fine. I would think at this point that the NIC is operating fine. I also have several file shares that can be accessed in both directions. I feel as though this must be a firewall issue, but when I turn off all firewalls for testing the same symptoms occur. Does anyone have any more suggestions?

  • Sounds like you're using an XP client to connect, where Network Level Authentication won't work. I haven't found any way around this issue, and I doubt MS will care much, as the problem doesn't exist in Vista or Win7. Upgrading XP to SP3 and installing the RDP 6.1 client has helped in many cases for me.

    If anyone else has something better, feel free to edit it in.

    JohnyD : I'm connecting from an XP SP3 workstation. However, I am using RDP client v.6.0.6001.18000. I find it strange that it would connect and work for a minute but then disconnect me. I'll get the updated client and try again. Thanks.
    JohnyD : NM, I have a newer version of the RDP client application than the update in the link you provided. Still not sure why this isn't working.
    From Chris S
  • Did you enable remote access? On the start menu right click on computer and choose properties. Click on remote settings and (at least to start) ensure that "allow connections from computers running any version of remote desktop" is selected.

    JohnyD : I have enabled remote access. A dialog said that firewall rules had been created. I connected just now and it logged in fine. After about 10 seconds my session is disconnected with the dialog saying, "The computer can't connect to the remote computer. Try connecting again. If the problem continues, contact the owner of the remote computer or your network administrator." So, I can connect... but something is disconnecting me not long afterwards. I have it set to allow connections from all legacy clients.
    From Jim B
  • 100% with Chris. I also had the same issue some time back; SP3 fixed it. Have you tried the second option, "allow connections from computers running any version of Remote Desktop (less secure)", to see if you still get disconnected?

    JohnyD : I have done this and I am still getting disconnected.
    From bonga86
  • Hi JohnyD, based on your 2 updates on this issue, I would advise you to get the latest drivers from Broadcom:

    http://www.broadcom.com/support/ethernet_nic/downloaddrivers.php

    Update the drivers and try each Ethernet port on that server. Good luck.

    JohnyD : I downloaded the latest Broadcom NetXtreme II drivers for 64-bit Windows 2008. Windows is telling me that my drivers are up to date.
    From bonga86
  • Ok, I now have it working. The 6.1 client did not work for me. The 7.0 client does which can be downloaded here: http://support.microsoft.com/kb/969084/en-us

    In addition to this I had to enable two rules in my Inbound firewall: 1) Remote Administration (RPC) 2) Remote Desktop (TCP-In)

    Hopefully this will help others. Thanks for all your help.

    Edit 1: Also, if you want to keep Network Level Authentication for RDP sessions on your 2008 server and you're connecting from an XP SP2/SP3 workstation, then there are some registry changes you will need to make in order to enable CredSSP. This is from the page: http://support.microsoft.com/kb/951608/

    • Click Start, click Run, type regedit, and then press ENTER.
    • In the navigation pane, locate and then click the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa
    • In the details pane, right-click Security Packages, and then click Modify.
    • In the Value data box, type tspkg. Leave any data that is specific to other SSPs, and then click OK.
    • In the navigation pane, locate and then click the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders
    • In the details pane, right-click SecurityProviders, and then click Modify.
    • In the Value data box, type credssp.dll. Leave any data that is specific to other SSPs, and then click OK.
    • Exit Registry Editor.
    • Restart the computer.
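The "leave any data that is specific to other SSPs" caveat matters: Security Packages is a REG_MULTI_SZ, so overwriting it blindly would break the other security packages. If PowerShell 2.0 happens to be installed on the XP machine, a sketch of the same change that appends rather than overwrites (key paths as in the KB article; otherwise just follow the regedit steps above):

```powershell
# Append 'tspkg' to the Security Packages multi-string without
# clobbering the SSPs already listed (per KB951608).
$lsa = 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa'
$packages = (Get-ItemProperty $lsa).'Security Packages'
if ($packages -notcontains 'tspkg') {
    Set-ItemProperty $lsa 'Security Packages' ($packages + 'tspkg')
}

# SecurityProviders is a comma-separated REG_SZ; append credssp.dll similarly.
$sp = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders'
$provs = (Get-ItemProperty $sp).SecurityProviders
if ($provs -notmatch 'credssp\.dll') {
    Set-ItemProperty $sp 'SecurityProviders' ("$provs, credssp.dll")
}
```

A reboot is still required for either approach to take effect.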
    From JohnyD

Network Printer or Share Printer on Server?

Hi,

Small office, <10 users. A USB printer which also has a network port. Is it better to share the printer by plugging the USB into the server and doing a Windows share, or to use the built-in network port?

We are using the built-in network port at the moment, but we have no way to delete jobs that get stuck in the queue.

Thanks, Joe

  • For a small office like yours, it depends on how much printing the 10 users do and how "big" a printer it is. The way I read your question, it sounds like it's a smallish printer, maybe something like an HP 2050 (see list of e.g. HP laserjets here), as opposed to a 4000-series.

    If you print enough that you have a workgroup printer, and since you have some sort of a server going, you're better off using that as a print server. Configure it to print to the printer using either USB or network, then share the printer out and the users will print through the print server. The benefit is the manageability, having jobs queue up on the server where you can log them, prioritize them, whatever.

    Unless it's a very big printer, or a very very lightly used one, I wouldn't rely on the printer to queue up the jobs internally.

    Joeme : Thanks very much, it is an HP J6424. I would highly recommend anyone reading this not buy one; it is a nightmare of constant paper jams and problems that require power cycling the printer. I think running it through the Windows share will be best as well, for the print queue management. I am looking to buy a replacement printer, as we are printing almost constantly now. Thanks again!
    From Ward
  • I'd recommend plugging it into the network with the network socket, allocating it a fixed IP, then creating a print queue on the server that points to the printer.

    Clients can then connect to the queue on the server (via a nice friendly network name), and they'll have the correct drivers served automatically (assuming this is a windows server). You'll be able to delegate permissions for the printer via the server which means being able to clear out jobs, etc.

    Using the ethernet connection on the printer means it's not tied in proximity to the server, and it's generally a more robust way of hanging it all together compared to USB.
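A sketch of that setup using the printer admin scripts and printui.dll that ship with Windows Server 2008. The printer name, IP address, and driver string below are placeholders; the driver name must exactly match one installed on the server, so check the syntax locally with `rundll32 printui.dll,PrintUIEntry /?` first:

```shell
:: Create a standard TCP/IP port pointing at the printer's fixed IP (RAW/9100).
cscript %WINDIR%\System32\Printing_Admin_Scripts\en-US\prnport.vbs -a -r IP_192.168.1.50 -h 192.168.1.50 -o raw -n 9100

:: Create a queue on that port (printer and driver names are placeholders).
rundll32 printui.dll,PrintUIEntry /if /b "OfficePrinter" /r "IP_192.168.1.50" /m "HP Officejet J6400 series"

:: Share the queue so clients connect via \\server\OfficePrinter.
rundll32 printui.dll,PrintUIEntry /Xs /n "OfficePrinter" sharename "OfficePrinter" attributes +Shared
```

Once shared, permissions (including who may cancel stuck jobs) are managed on the queue's Security tab on the server.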

  • This problem has nothing to do with the printer being network-connected. This has been an ongoing problem under Windows for printers connected any way for years.

    See "Solution five: Restart the Print Spooler device" in this HP document on jobs stuck in queue under Windows

    http://h10025.www1.hp.com/ewfrf/wc/document?docname=c01893668&cc=us&lc=en&dlc=en&product=3635378

    For the paper jams you mentioned in your comment, see "Solution three: Clean the rollers and the duplexer" near the bottom of this HP Paper Jam document:

    http://h10025.www1.hp.com/ewfrf/wc/document?docname=c01312792&lc=en&dlc=en&cc=us&lang=en&product=3635378

    From