Tuesday, January 25, 2011

Syntax checker going crazy with Cisco

Hi there,

I'm trying to verify just two commands for Cisco IOS extended ACLs.

They are:

access-list 101  deny   any 192.168.0.0/23 any

access-list 101  permit udp 192.168.1.1 any 

For this I'm using a syntax checker available on the internet at http://techie.devnull.cz/aclcheck

But when I run the check, it gives me an error on line 1 saying "destination specification ?". I've read many guides to ACL syntax on the internet, but I simply don't get it; I just can't find the error. The destination is specified by the any keyword.

Are there errors in those commands? Or is there a more reliable way to validate Cisco IOS commands?

Regards

Edit: new commands are

access-list 101  deny   ip 192.168.0.0/23 any

access-list 101  permit udp host 192.168.1.1 any 

Same error.

Notice I've maintained the /23 notation on purpose; check the comments. I'll try out your suggestions, but if wildcard masks are the right notation, it will break my translator's work, hehe.

  • access-list 101 deny ip any 192.168.0.0 0.0.1.255 any
    access-list 101 permit udp host 192.168.1.1 any

    You had a few problems in here. First, you have to specify a protocol, even if that protocol is just ip. Second, you have to use wildcard masks to specify your subnet. Third, you must designate either a wildcard mask, any, or preface the host IP address with the word host. Hope this helps.

    I'm not sure about a syntax checker. I just wear out the ? key on my keyboard.

    From Jason Berg
  • In your first line the error comes from:

    1. /23: you can't write the ACL like that, you should use a wildcard mask
    2. any after deny: this is the protocol field, and any is not valid there

    In the second line, host is missing before the IP address.

    The right syntax is:

    access-list 101 deny ip 192.168.0.0 0.0.1.255 any
    access-list 101 permit udp host 192.168.1.1 any
    

    I would recommend you use named access lists if possible; writing ACLs as you did is a bit old school, harder to manage, and more prone to errors. A better way to do this is:

    ip access-list extended SOMETHING
      deny ip 192.168.0.0 0.0.1.255 any
      permit udp host 192.168.1.1 any
    
    jaderanderson : OK, I've already fixed the missing host, but the /23 should work; check http://www.cisco.com/en/US/products/sw/secursw/ps1018/products_tech_note09186a00800a5b9a.shtml#topic2 . I've changed the any to ip too. I'll redo the syntax check.
    jaderanderson : Just to clarify, I'm using the 101 and 102 numbering only because they're the standard input/output ACLs. Right now I have no intention of testing them on a real Cisco, but I need to know if they will work. I've designed a web app that translates commands for different firewalls and such. Thanks for the help.
    Zypher : @jader you might not be on an IOS rev that accepts the 'slash' notation
    jaderanderson : Hmmm... maybe you're right, Zypher. Can you tell me when slash notation was accepted? The aclcheck program was last updated in 2005; with the mask as suggested, the program works fine.
    radius : @jaderanderson Nothing in the URL you provided says that /X should work; it only says a subnet can be *represented* like that, not that you can use this representation in an ACL. As far as I know, the /X representation on IOS is only supported for prefix-lists.
    jaderanderson : Well, I guess you're right, radius. I'll have to implement a reverse mask myself then, since the 0.0.1.255 worked on aclcheck (see the sketch after this thread).
    From radius
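
For reference, converting a /X prefix length into the wildcard (inverse) mask that numbered ACLs expect is mechanical; here is a minimal bash sketch of what the translator could do (the function name prefix2wildcard is hypothetical):

    #!/bin/bash
    # Convert a prefix length (0-32) into a Cisco wildcard mask, e.g. 23 -> 0.0.1.255
    prefix2wildcard() {
      local host=$(( (1 << (32 - $1)) - 1 ))   # 32-bit value with only the host bits set
      echo "$(( (host >> 24) & 255 )).$(( (host >> 16) & 255 )).$(( (host >> 8) & 255 )).$(( host & 255 ))"
    }

    prefix2wildcard 23   # prints 0.0.1.255
    prefix2wildcard 32   # prints 0.0.0.0 (a single host)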

Linux DHCP 2 networks

I have 2 networks (192.168.1.0/24 & 192.168.2.0/24), and 1 DHCP server.
I need to serve DHCP for the two networks, but it doesn't work: clients on the second network (192.168.2.0/24) (a PC and a notebook) always get an IP from the 192.168.1.0/24 range, even though the server has two ranges defined. How should I configure it?

Router OS: Ubuntu Server 10.04
Client OS: Ubuntu 10.04

See UML

  • The problem with two subnets is how the DHCP server should decide which one to serve from. I would define host entries for your computers:

    host notebook {
        hardware ethernet 00:AB:CD:EF:GG:GG;  # GG:GG is a placeholder; use the machine's real (hex) MAC here
        fixed-address 192.168.2.10;
    }
    
    Chris S : No, the DHCP server will give an address that corresponds to the NIC where the request was received. This applies by extension to DHCP proxies, using the NIC where the original DHCP Discover was received.
    From TooAngel
  • Is the DHCP server dual-homed (does it have an IP address on both networks)? Your DHCP server should have an interface listening on each subnet: one on the 192.168.1.0/24 network and the other on the 192.168.2.0/24 network. The startup for dhcpd should be configured to listen on each interface.

    Dhcpd also has the option to create a shared network containing both subnets, but I think this will essentially pool your addresses and you won't be able to control which machines receive addresses for a particular subnet. (A per-subnet sketch follows the comments.)

    Chris S : +1, You can also use a DHCP proxy if the server isn't on the subnet you want to serve, but it's generally not the best idea.
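
For reference, a minimal dhcpd.conf sketch of the per-subnet layout described above; dhcpd answers from the subnet block matching the interface the request arrived on (the ranges and router addresses here are assumptions):

    # one declaration per directly attached network
    subnet 192.168.1.0 netmask 255.255.255.0 {
        range 192.168.1.100 192.168.1.200;
        option routers 192.168.1.1;
    }

    subnet 192.168.2.0 netmask 255.255.255.0 {
        range 192.168.2.100 192.168.2.200;
        option routers 192.168.2.1;
    }

On Ubuntu the interfaces dhcpd listens on are typically set via the INTERFACES line in /etc/default/dhcp3-server, though the file name varies with the package version.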

nginx rewrite not working? It simply ignores it

Hi all, I have a weird problem with nginx: it doesn't want to rewrite...

I have this configuration, and I need to pass a hash (40 chars) to a PHP file. It works with Apache mod_rewrite, but with nginx it doesn't. I even tried simple rewrites; they simply don't work.

server {
.........
        location / {
            rewrite ^aa$ /downloadTORRENTZ.php break;
            root   /usr/share/nginx/html;
            index  index.html index.htm;
            rewrite  "^([A-Z0-9]{40})$" /file.php?ddl=$1 break;
        }
}
    1. Requests usually start with /, so your regexp should look like:

      rewrite "^/([A-Z0-9]{40})$" /file.php?ddl=$1 break;

    2. Is your hash ALL CAPS? Maybe you should use [a-zA-Z0-9].

    3. 40 characters... It looks like a SHA-1 hash. Maybe you should simplify the regexp to [0-9A-F]. (A combined sketch of the location block follows.)
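
Putting those three points together, a minimal sketch of the corrected location block (the root and file names are the OP's; treating the hash as case-insensitive hex is an assumption):

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;

        # request URIs carry a leading slash; match a 40-char hex hash
        rewrite "^/([0-9a-fA-F]{40})$" /file.php?ddl=$1 break;
    }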

Process runs slower as a scheduled task than it does interactively

I have a scheduled task which is very CPU- and IO-intensive, and takes about four hours to run (building source code, if you're curious). The task is a PowerShell script which spawns various sub-processes to do its work. When I run the same process interactively from a PowerShell prompt, as the same user account, it runs in about two and a half hours. The task is running on Windows Server 2008 R2.

What I want to know is why it takes so much longer to run as a scheduled task - more than an hour longer. One thing I noticed is that the task scheduler runs at Below-Normal priority, so when my task starts, it inherits the same lowered priority. However, I've updated the script to set the PowerShell process priority back to Normal, and it still takes just as long.

Anybody have an idea what could be different between the two scenarios? I've ruled out differences in processor and IO load - this task is the only thing the system is used for, so there's nothing else running that could be competing for resources.

  • Maybe scheduled tasks run at a lower priority by default.

    Use prio to force higher priority.

    Charlie : You're right that they run at lower priority, but as I mentioned, I've already accounted for that. Unless there's some priority other than process priority that I don't know about.
    From mcandre
  • If you set it to run as a scheduled task as User X, and then log in as User X before it's supposed to run, it should open a window in your session when it runs; it'll be running in your session.

    If you do this, does it take the longer or the shorter period of time? I don't know what this will mean, but it may be a useful differentiator. Could there be some network access that the user account has when logged in, but not when running as a scheduled task, that needs to time out and fail? Is the behavior different if you create a new user account and have the scheduled task run under that account?

    Another idea: when running it as a scheduled task, now that you've fixed the priority in your script, do the sub-processes all run as Normal, or Below-Normal?

    Charlie : The subprocesses definitely run as Normal, as verified by procexp/taskman. I don't think the network thing is it, because the task doesn't do any substantive network access, but I'll double-check that. The idea about running the task interactively is interesting too; I'll give it a try.
    mfinni : The process itself doesn't have to do any network access that you know of for this to be a problem. Read this post from Sysinternals and see how something seemingly unrelated can cause hangs/slowness. http://blogs.technet.com/b/markrussinovich/archive/2005/08/28/the-case-of-the-intermittent-and-annoying-explorer-hangs.aspx
    From mfinni
  • First, you can use a higher than normal priority (High, for example).

    Second, you have to understand that a foreground session takes some resources, mainly hard disk IO and memory, so a scheduled task gets less. For a clean benchmark you have to log off while your PowerShell script is running.

    You can also try adding more memory, using a ramdrive, or splitting the work across several hard disks to speed things up.

    Charlie : Thanks for the ideas. The machine has plenty of memory and IO capacity; the problem is just that things run slower as scheduled tasks than they do interactively. When the scheduled task is running, there is usually no interactive logon session, so I don't think that's the issue either.
    From evg345
  • It appears that there is more than just "regular" process priority at work here. As I noted in the question, the task scheduler by default runs your task at lower than normal priority. This question on StackOverflow describes how to fix any task to run at Normal priority, but the fix still leaves one thing slightly different: memory priority. Memory priority was a new feature for Windows Vista, and is described in this Technet article. You can see memory priority using Process Explorer, which is a must-have tool for any administrator or programmer.

    Anyway, even with the scheduled task priority fix, the memory priority of your task is set to 4, which is one notch below the normal setting of 5. When I manually boosted the memory priority of my task up to 5, the performance was on par with running the process interactively.

    For info on boosting the priority, see my answer to a related StackOverflow question about IO priority; setting memory priority is done similarly, via NtSetInformationProcess, with PROCESS_INFORMATION_CLASS set to ProcessMemoryPriority (the value of this is 39 or 0x27). I might make a free utility that can be used to set this, if others need it and don't have access to programmer tools.

    EDIT: I've gone ahead and written a free utility for querying and setting the memory priority of a task, available here. The download contains both source code and a compiled binary.

    MadBoy : Glad you found it, thanks for sharing.
    From Charlie
  • I don't have sufficient rights to comment so I'm commenting here.

    Charlie, I'd be very interested in a utility like the one you describe above for setting memory priority.

    Thanks,

    Tom

    Hi Charlie,

    I looked at the utility and it seems to work great. If possible, I'd love to see one more feature added. Hey, I know beggars can't be choosers, but I thought I'd ask anyway ;-)

    As I usually run batch files from the scheduler, it would be great if, when I didn't specify a PID, it could use the PID it was called from. Or some token like $$ could be used to tell the program to use the calling PID. This way I wouldn't ever have to know the PID and, since what I'm really executing inherits its priority from the DOS shell, it would run at normal priority as well. So my syntax in the batch file could be something like

    ProcessPriority set $$ normal

    myProgramName parameters

    While it's possible to get the current PID in a batch file, it's awkward.

    Thanks,

    Tom

    Charlie : I'll look at doing this over the weekend; I'll post here when I have it done.
    Charlie : See comments on my answer above for a link to the utility I described.
    Charlie : Unfortunately it's not terribly easy to get the PID of the parent process, but you might consider switching to Powershell (instead of Batch), in which case you can get the script process ID using this expression: `[Diagnostics.Process]::GetCurrentProcess().Id`.

Linux boot: Can I prevent the loading of a module using a boot parameter?

Hello, everyone!

I know that I can blacklist a module in /etc/modprobe.conf or /etc/modprobe.d/blacklist, but I have a nasty module that loads before the filesystems are mounted (except /boot, of course), so I assume /etc won't have been read by then.

Can I prevent a module from loading using a kernel boot parameter?
(I'm using GRUB)

Or are there other ways to do this?

Thank you!

  • This Ubuntu site has a nice breakdown of options:

    https://help.ubuntu.com/community/BootOptions#Common%20Boot%20Options

    While there aren't any ways to tell the kernel not to load a module at boot time, you can get in the way later on down the road. Take a look at the break= options, which change initrd behavior. If you don't know the exact module, you can perhaps use these to isolate it further.

  • Modules that load that early in the boot sequence are built into the initramfs; it seems likely that you can run update-initramfs -c -k your_kernel_version to ensure that the blacklisted module isn't included in that initramfs image (a sketch follows below).

    jrod : *I don't actually use Ubuntu.
    From jrod
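
For reference, a minimal sketch of the blacklist-then-rebuild sequence on Debian/Ubuntu (the module name nastymod is a placeholder; -u updates the existing image, where the answer above uses -c to create one from scratch):

    # blacklist the module so modprobe, including inside the initramfs, skips it
    echo "blacklist nastymod" | sudo tee /etc/modprobe.d/blacklist-nastymod.conf

    # rebuild the initramfs for the running kernel so the blacklist takes effect at boot
    sudo update-initramfs -u -k "$(uname -r)"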

What is the best approach for setting up a good, secure smtp server on Debian?

Which server do you recommend? (Postfix?) Which configuration for authenticating users who send mail? Which setup makes it inbox-friendly (DKIM, SPF, DomainKeys, etc.)?

Is there some detailed guide to walk a newbie through all or most of these?

Any advice would be much appreciated.

  • Setting up an SMTP server is not for the faint of heart - especially so if you are a newbie.

    If this is for real production use, I would recommend outsourcing your SMTP needs to an online service. However, if it is for personal/hobby use, you can find a ton of online tutorials on how to do things.

    In my experience, the easiest mail server to set up on Debian is Citadel. It's a straightforward apt-get; it provides a web-based interface and a command-line interface for configuration and mail access, and supports all kinds of features.

    Good luck.

    From sybreon

Why is Syslog Not Writing Logs To The Designated Files?

I've been trying to route Apache's logs through Syslog (for some reason log rotation had stopped, and using Syslog and logrotate seemed a reasonable solution).

I have sent Apache's error logs to local7 and piped the access logs to local6 via the logger program.

I want Syslog to write the error and access logs to /var/log/apache2/error.log and /var/log/apache2/access.log respectively.

To that end I have added the following to /etc/syslog.conf:

# Logging for Apache using local7 facility for error messages
# and local6 for access log
# Added 20/06/2010 by Chris Bunney
local7.*                        /var/log/apache2/error.log
local6.*                        /var/log/apache2/access.log

I know that the error and access logs are being sent to Syslog correctly because they are showing up in /var/log/syslog; however, they are not being written to the files I want.

The original file permissions of the target files:

-rw-r----- 1 root adm       0 2010-06-20 23:01 access.log

The current file permissions of the target files, which I have loosened to rule out permissions as the cause:

-rw-rw-rw- 1 syslog adm       0 2010-06-20 23:01 access.log

Everything looks fine to me, so why aren't the messages Syslog is receiving being written to the files I want? Have I missed something simple?


Full Output of cat /etc/syslog.conf:

#  /etc/syslog.conf     Configuration file for syslogd.
#
#                       For more information see syslog.conf(5)
#                       manpage.

#
# First some standard logfiles.  Log by facility.
#

auth,authpriv.*                 /var/log/auth.log
*.*;auth,authpriv.none          -/var/log/syslog
cron.*                          /var/log/cron.log
daemon.*                        -/var/log/daemon.log
kern.*                          -/var/log/kern.log
lpr.*                           -/var/log/lpr.log
mail.*                          -/var/log/mail.log
user.*                          -/var/log/user.log

#
# Logging for the mail system.  Split it up so that
# it is easy to write scripts to parse these files.
#
mail.info                       -/var/log/mail.info
mail.warn                       -/var/log/mail.warn
mail.err                        /var/log/mail.err

# Logging for INN news system
#
news.crit                       /var/log/news/news.crit
news.err                        /var/log/news/news.err
news.notice                     -/var/log/news/news.notice

# Logging for Apache using local7 facility for error messages
# and local6 for access log
# Added 20/06/2010 by Chris Bunney
local7.*                        /var/log/apache2/error.log
local6.*                        /var/log/apache2/access.log

#
# Some `catch-all' logfiles.
#
*.=debug;\
        auth,authpriv.none;\
        news.none;mail.none     -/var/log/debug
*.=info;*.=notice;*.=warn;\
        auth,authpriv.none;\
        cron,daemon.none;\
        mail,news.none          -/var/log/messages

#
# Emergencies are sent to everybody logged in.
#
*.emerg                         *

#
# I like to have messages displayed on the console, but only on a virtual
# console I usually leave idle.
#
#daemon,mail.*;\
#       news.=crit;news.=err;news.=notice;\
#       *.=debug;*.=info;\
#       *.=notice;*.=warn       /dev/tty8

# The named pipe /dev/xconsole is for the `xconsole' utility.  To use it,
# you must invoke `xconsole' with the `-file' option:
#
#    $ xconsole -file /dev/xconsole [...]
#
# NOTE: adjust the list below, or you'll go crazy if you have a reasonably
#      busy site..
#
daemon.*;mail.*;\
        news.err;\
        *.=debug;*.=info;\
        *.=notice;*.=warn       |/dev/xconsole
  • Did you restart syslogd? You can also use lsof -f -p <pid-of-syslogd> to see which log files it has open. The syslog.conf looks right; you might want to post your Apache configuration. (A sketch of the restart-and-verify step follows the comments.)

    chrisbunney : Yep, I missed something simple. I must have restarted everything *but* syslog. Restarted syslog and it worked fine. Sorry, I don't have the rep for an upvote. Now I think I ought to go to bed; the lack of sleep is obviously having negative effects...
    From delimiter
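
For reference, a minimal sketch of that restart-and-verify step, assuming the classic sysklogd package of that era (the init script name varies by distribution):

    # restart syslogd so it rereads /etc/syslog.conf
    sudo /etc/init.d/sysklogd restart

    # confirm syslogd now has the apache2 log files open
    sudo lsof -p "$(pidof syslogd)" | grep /var/log/apache2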

echo based on grep result

I need a one-liner that displays 'yes' or 'no' depending on whether grep finds any results.

I have played with grep -c, but without success.

  • How about:

    uptime | grep user && echo 'yes' || echo 'no'
    uptime | grep foo && echo 'yes' || echo 'no'
    

    Then you can make it quiet:

    uptime | grep --quiet user && echo 'yes' || echo 'no'
    uptime | grep --quiet foo && echo 'yes' || echo 'no'
    

    From the grep manual page:

    EXIT STATUS

    Normally, the exit status is 0 if selected lines are found and 1 otherwise. But the exit status is 2 if an error occurred, unless the -q or --quiet or --silent option is used and a selected line is found.

    From Weboide
  • I don't think you can do it with grep alone. Is it possible to put a bash script around it?

    kaerast : The question wasn't whether grep can do it alone, the question was how to do something based on the results of grep.
    From Pimmetje
  • Not sure what you mean by "one-liner"; for me, this is a one-liner:

    Just add ; if [ $? -eq 0 ]; then echo "Yes"; else echo "No"; fi after your grep command

    bash$ grep ABCDEF /etc/resolv.conf; if [ $? -eq 0 ]; then echo "Yes"; else echo "No"; fi
    No
    bash$ grep nameserver /etc/resolv.conf; if [ $? -eq 0 ]; then echo "Yes"; else echo "No"; fi
    nameserver 212.27.54.252
    Yes
    

    Add the -q flag to grep if you want to suppress grep's output

    bash$ grep -q nameserver /etc/resolv.conf; if [ $? -eq 0 ]; then echo "Yes"; else echo "No"; fi
    Yes
    
    From radius
  • This version is intermediate between Weboide's version and radius's version:

    if grep --quiet foo bar; then echo "yes"; else echo "no"; fi
    

    It's more readable than the former and it doesn't unnecessarily use $? like the latter.

Emulate a SMP server with a Linux Cluster?

Is it possible to combine a few machines into a cluster and have it appear as a single server? For example, with such a cluster we could run a 32-thread CPU-bound process on 8 quad-core machines.

Is there any existing software that would allow this? The only thing I'm aware of is MOSIX, but I'm not sure if it works.

I understand that it might incur huge performance overhead. However, I still want to try :)

Nginx + PHP FastCGI fails - how to debug?

I have a server on Amazon EC2 running nginx + PHP, with PHP FastCGI on port 9000.

The server runs fine for a few minutes, and after a while (several thousand hits in this case) FastCGI dies and nginx returns a 502 error.

The nginx log shows:

 2010/01/12 16:49:24 [error] 1093#0: *9965 connect() failed (111: Connection refused) while connecting to upstream, client: 79.180.27.241, server: localhost, request: "GET /data.php?data=7781 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "site1.mysite.com", referrer: "http://www.othersite.com/subc.asp?t=10"

How can I debug what is causing FastCGI to die?

  • You can start by examining what PHP prints on standard error when it dies. If that sheds no light on the matter, then it might be enlightening to hook up strace to the PHP processes and watch them do their thing, and then look at the last things they do. Running your FCGI processes under a competent process monitoring framework such as daemontools is also a good idea; it's how I run all my PHP processes under nginx.
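
    A minimal sketch of attaching strace to the workers, assuming the processes are named php-cgi (output paths are arbitrary):

    for pid in $(pgrep -x php-cgi); do
      sudo strace -tt -o "/tmp/php-cgi.$pid.trace" -p "$pid" &   # one timestamped log per worker
    done
    # stop the traces when done with: sudo pkill -x strace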

    From womble
  • I realize that the OP may have moved on by now, but if somebody comes here with the same problem, I hope this helps.


    In a default setup, nginx runs as the user "nobody", whereas spawn-fcgi spawns php-cgi children as the user "root". So nginx is unable to connect to fastcgi://127.0.0.1:9000 with its current permissions. All you have to do is change the spawn-fcgi command a little bit to fix this.

    spawn-fcgi -a 127.0.0.1 -p 9000 -f /usr/bin/php-cgi -C 5 -U nobody
    

    Or you could use a UNIX socket (I prefer this method)

    spawn-fcgi -s /var/run/fcgi.sock -f /usr/bin/php-cgi -C 5 -U nobody
    

    And change your fastcgi_pass in nginx.conf to this:

    ...
     location ~ \.php$ {
            fastcgi_pass   unix:/var/run/fcgi.sock;
            fastcgi_index  index.php;
            include        fastcgi_params;
            fastcgi_param SCRIPT_FILENAME /usr/local/nginx/html$fastcgi_script_name;
        }
    ...
    

Git SSH key-pair stopped working after a location change

Hi all, I have used the same public-key scheme to SSH to my server, which hosts its own git repository. Recently I changed the location I work from (different IP), and now git asks for my password every time I log in.

I'm using Windows to connect to my server, with pageant keeping track of the authentication.

I've looked in the auth log files on my server, and each login shows that the public key WAS indeed accepted, yet I am still getting prompted for the password on every action.

Any ideas?

  • Private key access can be restricted by host. Verify that the public key stored in ~/.ssh/authorized_keys doesn't have a qualifier restricting which hosts it is valid for (an example of such a qualifier appears after this answer).

    Michael : wasn't an issue, thanks though
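
For reference, the kind of restriction to look for is sshd's from= option at the front of a key line; a hypothetical example (the 203.0.113.* pattern and key text are placeholders):

    # in ~/.ssh/authorized_keys: this key is accepted only from matching source hosts
    from="203.0.113.*" ssh-rsa AAAAB3Nza...rest-of-key... user@laptop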

Tracking or Locating a Server online?

Is there any way to locate the server of a webpage? I am wondering this in case I want to make a webpage on the same server or something like that.

  • You can often determine the owner or operator of a web site by using the whois command or a whois server.

    whois example.com
    

    Or try this Verisign whois server (one of many).

  • Your question is not very clear. If you want to look up IPs, you can also use tools such as MaxMind's geographic IP location database or ARIN's whois lookup.

    From Warner
  • You are looking for a way to know where the server is actually hosted (the actual hosting provider).

    You can try this (using bash):

    whois "$(dig +short www.codealpha.net | grep -m1 '^[0-9]')"
    

    In this case, this returns

    OrgName:    Linode
    [...]
    

    This can be tweaked and improved.

    Here's a way to do it in a bash function:

    #!/bin/bash
    
    function whathost()
    {
      whois "$(dig +short "$1" | grep -m1 '^[0-9]')"
    }
    
    whathost linux.com
    

    Notes:

    Note that some results will probably not be meaningful; this is because some organizations own their IP addresses and host their own servers.

    From Weboide

traceroute does not work, output is * * * but network is fine.

Hi, on my Linux box traceroute does not work. The output is like this:

$ traceroute google.com

traceroute to google.com (209.85.231.104), 30 hops max, 52 byte packets
1  * * *
2  * * *
3  * * *
4  * * *

Can anyone tell me why it's not working? Any possible reasons behind it?

  • It could be that a firewall upstream from you is blocking the UDP packets traceroute sends.

    On modern Unix-like operating systems, the traceroute utility by default uses UDP datagrams with destination ports numbering from 33434 to 33534.

    radius : $ is more likely a Unix prompt than a Windows one, and the Windows traceroute command is tracert. And the question says Linux...
    lalalalalalala : I realised this, so I edited the answer.
    dbasnett : I thought traceroute used ICMP packets. I did not know that Unix-based systems use UDP. Learn something new every day.
  • This is probably because the ICMP TIME_EXCEEDED answer is filtered by the router/firewall that you use as the default gateway, or by your Linux system itself.

    From radius
  • Try using -T (TCP) or -U (UDP) to bypass the firewall.
    Some routers/firewalls don't let ICMP echo pass through; that's why you'd use those two to bypass them (see the sketch after this answer).
    Anyway, contrary to what Wikipedia states, on my Debian boxes traceroute still uses ICMP packets and not UDP.

    EDIT

    I was wrong... it uses UDP... the ICMP packets coming back are for an unreachable port... sorry

    From Pier
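
For reference, a sketch of those probe-type flags as implemented by the modern Linux traceroute (both typically require root):

    sudo traceroute -T -p 80 google.com   # TCP SYN probes to port 80; often passes firewalls
    sudo traceroute -I google.com         # ICMP echo probes, like the Windows tracert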

In simple language, if I install and use Sphinx (GPLv2) on my co-located server for a commercial LAMP website do I need to buy a license?

Do I have to license Sphinx for a commercial website? I don't need to modify the code... just use it with MySQL.

  • No.

    From: http://www.sphinxsearch.com/licensing.html

    An important instance when a commercial license is not required generally occurs when one uses Sphinx for a Web site or a hosted service. For example, even if one provides a paid search services to their end users, a commercial license would not be required

    From LukeR

optimizations for a server with nginx as the front and apache as the back?

Are there any general optimizations for this sort of setup? I read something about nginx not using the most recent HTTP protocol version when proxying, so are there things I could tune in Apache? Also, is there a relationship between the keepalive attributes of each webserver, or max connections and such?

I'd hate to optimize one of them the correct way and have the other stop those optimizations from meaning anything.

Where do I start?

  • If you're proxying traffic to Apache from nginx, then the only place you need to worry about keepalives is in nginx. Nginx serves HTTP/1.1 to clients, but can only proxy HTTP/1.0, which doesn't support keepalive, so there'll be no keepalive between Apache and nginx.

    Nginx will run several thousand connections without blinking, so start with setting a nice high keepalive timeout and reduce it if necessary.

    The main issue you need to look at when configuring is that both Apache and nginx can handle enough simultaneous connections; depending on your setup, you'll need several times as many nginx connections as Apache connections.
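
A minimal sketch of the nginx side of such a setup (the timeout value and backend address are assumptions):

    http {
        keepalive_timeout 65;                       # generous client-side keepalive

        server {
            listen 80;
            location / {
                proxy_pass http://127.0.0.1:8080;   # apache listening on a local port
            }
        }
    }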

Does anyone have a valid and working example of OpenLDAP meta backend?

I have been Googling my fingers off and simply cannot find a working example of how to merge/proxy an OpenLDAP server and a Windows AD server. Has anyone worked with this before? Any suggestions would be appreciated.

The idea is simple:

openldap.mydomain.local ----> Linux LDAP Server

winad.mydomain.local ----> Windows AD Server

Some users are on Linux and some on WinAD. OpenLDAP should search both on login. A working example would be appreciated.
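
No answer was posted, but for reference, the general shape of such a proxy with OpenLDAP's meta backend, per slapd-meta(5), is roughly this sketch (the common suffix, the ou branches, and the remote servers' real base DNs are all assumptions):

    # slapd.conf: one proxied URI per remote directory, merged under one suffix
    database      meta
    suffix        "dc=mydomain,dc=local"

    # the DN after each host is the local (virtual) branch for that server;
    # suffixmassage rewrites it to that server's real base DN
    uri           "ldap://openldap.mydomain.local/ou=linux,dc=mydomain,dc=local"
    suffixmassage "ou=linux,dc=mydomain,dc=local" "dc=linux,dc=local"

    uri           "ldap://winad.mydomain.local/ou=ad,dc=mydomain,dc=local"
    suffixmassage "ou=ad,dc=mydomain,dc=local" "dc=winad,dc=local"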

system monitoring software?

Possible Duplicate:
What tool do you use to monitor your servers?

Hi,

I'm a newbie at server administration, hence my questions :P

I need system monitoring software that works on CentOS 5 and basically sends me a daily report of:

* server load over the day, the maximum and minimum and when they happened, plus which service was consuming the most resources
* disk space available
* bandwidth used (if possible)

and urgent reports in these situations:

* when any of the services is down or unable to start (such as mysql, apache, proftpd, etc.)
* when there is a high load and, if possible, which service is causing it
* when there are too many login attempts against an FTP/SSH account or a specified port

If the app can also send an SMS alert, so much the better, but it's not critical.

Thanks for the help.

How can I test if LVM works on Ubuntu Lucid?

I have LVM2 installed on Ubuntu Lucid, and a volume group at /dev/fluid with free space (150 GB). I need to know if LVM is installed and working properly.

How can I test that LVM is working properly?

Thanks

Edit:

I am probably looking for a way to read/write a file in a test volume.

Here's my volume group info, in case it helps:

  --- Volume group ---
  VG Name               fluid
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               151,09 GiB
  PE Size               4,00 MiB
  Total PE              38679
  Alloc PE / Size       4864 / 19,00 GiB
  Free  PE / Size       33815 / 132,09 GiB
  • There are several commands you can use:

    pvs - lists physical volumes
    vgs - lists volume groups
    lvs - lists logical volumes
    

    What is /dev/fluid ?

    UPDATE

    What you want (I think) is to mount your logical volume.

    ls /dev/mapper
    

    Then mount:

    mount /dev/mapper/fluid-{something-you-found-above} /mnt
    
    Weboide : /dev/fluid is my volume group, see edit.
    From cstamas
  • If the logical volumes are mountable and you can write files to them, it's working. You can use the commands that cstamas mentions for more info about how your volumes are set up.

    From Daenyth
  • I am probably looking for a way to read/write a file in a test volume.

    Well, then just format your LV with a file system of your choice, mount it and touch a file on it.

    From joschi
  • Okay, I think I found out how to do this; it's pretty much what I was looking for:

    sudo lvcreate -L50M -n test fluid
    sudo mkfs.ext3 /dev/fluid/test
    sudo mkdir /media/test
    sudo mount /dev/fluid/test /media/test/
    

    And then I copied some files onto it and checked the checksum.

    source: http://www.nikhef.nl/~dennisvd/lvmcrap.html

    From Weboide

SSL certificates

I need to implement a certificate on a web server (Apache). The requirement is that it must work with SSL, so I need a certificate. I would like to avoid having to buy one, at least for now. From your experience, can anybody recommend a free certificate company that I can count on and that is known to be good and reputable? I would appreciate answers based only on experience using such companies.

Thank you all.

  • StartSSL (http://www.startssl.com/) offers a free certificate that is not self-signed. I have seen people elsewhere on ServerFault recommend this service in the past.

    Another option, if you're just testing, is to use a self-signed certificate.

    msanford : +1 self-signed.
    From James
  • Good for you, as your requirements are easy to satisfy, whether with a self-signed certificate, a private CA, or a public CA.

    The best-known public CA for this is CAcert, where you can sign up and issue certificates for any domain you own, for free! (Although donations are encouraged.)

    If you want to "get your hands dirty", you can also consider one of these options: run software like XCA, or just use OpenSSL to create a certificate that is self-signed or signed with a private CA (there is a difference). Both tools are cross-platform. XCA is a GUI program that is easy to use but not terribly well documented. OpenSSL, a command-line tool, has been around for ages and is well documented, but not really that easy to use.

    redknight : I think I will go with CAcert if it has proven to be a good provider. Thank you.
    From delimiter
  • Quick and easy:

    Generate a key pair:

        openssl genrsa -out www.myexample.com.key 2048

    Generate a CSR:

        openssl req -new -key www.myexample.com.key -out www.myexample.com.csr

    Self-sign the CSR with the private key, making the certificate valid for 3650 days:

        openssl x509 -in www.myexample.com.csr -out www.myexample.com.crt -req -signkey www.myexample.com.key -days 3650

    In the httpd.conf file there is an 'Include' parameter which pulls in a file called httpd-ssl.conf or ssl.conf, depending on your installation.

    You need to copy the key and certificate files to your Apache conf directory (somewhere you can reference them in the conf file). In httpd-ssl.conf or ssl.conf you then need to update the locations as follows:

        SSLCertificateFile /etc/httpd/conf/www.myexample.com.crt
        SSLCertificateKeyFile /etc/httpd/conf/www.myexample.com.key

    That is it.

How to map a drive to another domain using command prompt?

How can I map a drive to a server on another domain using net use? I tried the commands below, but they didn't work. Our company is currently set up with OpenVPN, if that matters.

net use H: \\server\share /user:domain\user password /persistent:yes

net use n: \\servername\sharename /user:username@domainname password

net use n: \\servername\sharename /user:domainname\username password

The user is able to see the folder but can't get in; she has her VPN up yet still can't get in. I checked the folder permissions on the server and everything looks OK from our end. The user is currently located in Mexico. Is there any other way to do this? I would really appreciate it if someone could help; I've already spent a couple of hours searching without success.

  • you need a \ to separate the domain & user,

    e.g. DOMAIN\USER

    net use n: \\servername\sharename /user:domainname\username password
    
  • Remember that the VPN software may be blocking SMB traffic, as may the server to which you are connecting.

    If the server is, say, Windows 2008 and it has a firewall rule to allow SMB connections from the local subnet but not from other subnets, then your connection will fail.

    If you want more info, you'll have to specify what you mean by 'didn't work'.

    From Ian

Unable to add an iptables RULE

Please help, guys; I am not able to add an iptables rule.

On the computer I have to log in to, Shorewall (the Shoreline Firewall) is installed. I know I can add a rule to /etc/shorewall/rules, but I decided to enter an iptables rule manually by typing:

/sbin/iptables -A local2fw -s 10.100.98.74 -p tcp -m tcp --dport 22 -j ACCEPT 

The FULL iptables-save output is here (before the command). After I executed the command: OUTPUT

Then why am I not able to log in from 10.100.98.74? I get a connection refused error... And I can log in over SSH from the other IPs listed in the rules...

Tell me what more info you need... What could be the probable cause?

  • What can be the probable cause?

    The most likely reason this isn't working is the order of the rules.

    See:

    -A local2fw -s 10.100.56.42 -j ACCEPT 
    -A local2fw -j all2all 
    -A local2fw -s 10.100.98.74 -p tcp -m tcp --dport 22 -j ACCEPT 
    

    By issuing -A local2fw, your rule is appended to that chain. But look at the last rule in the chain before you add yours: it sends everything to a different chain. By appending your rule after that jump, nothing ever reaches it.

    You could try passing the -I option instead of -A to insert the rule at a specific position (a sketch follows the comments).

    Shadyabhi : You are right. That solved the issue. Thanks!
    From Zoredache
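
For reference, a sketch of the insert: -I takes an optional 1-based position within the chain, so this lands the rule ahead of the all2all jump:

    # insert at position 1 of local2fw so it is evaluated before the jump to all2all
    /sbin/iptables -I local2fw 1 -s 10.100.98.74 -p tcp -m tcp --dport 22 -j ACCEPT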

Redirect 301 or RedirectMatch 301 not working for me

So this is my scenario:

I have 1 static IP and 2 servers: 1 is a web server, the other a mail server. I have a router as a hardware firewall, with all the ports that require passthrough forwarded to internal IP addresses.

If a user types the URL http://www.domain.com they see that website. If the user wants to access webmail they type the URL http://mail.domain.com but still see http://www.domain.com

I have set the webmail domain to be accessible via port 8080 on the mail server, and if the user types the URL http://mail.domain.com:8080 it works no problem, but http://mail.domain.com does not.

So this is my issue:

In my httpd.conf I want to set up a Redirect 301 so that when the user types http://mail.domain.com they get redirected to http://mail.domain.com:8080

I'd prefer not to use .htaccess and to keep the directives in httpd.conf.

Thanks...

  • There are at least 2 ways to do this:

    1. Create 2 virtual hosts, one for www.domain.com and one for mail.domain.com, and then put your RedirectMatch directive in the second one (a sketch of this appears at the end of this thread).
      You could also use the ProxyPass directive in the mail.domain.com virtual host to make it work like a reverse proxy and have mail.domain.com:80 work directly, without redirection.

    2. Use mod_rewrite to redirect only the mail.domain.com URL, with a rule like this:

      RewriteCond %{HTTP_HOST} ^mail.domain.com$ [NC]
      RewriteRule ^(.*) http://mail.domain.com:8080/$1 [R=301]

      with mod_rewrite you can also set up a reverse proxy with a rule like this (mail.domain.com should resolve to the internal IP):

      RewriteCond %{HTTP_HOST} ^mail.domain.com$ [NC]
      RewriteRule ^(.*) http://mail.domain.com:8080/$1 [P]

    From radius
  • You'll need to create a virtual host on your main web server that responds to web requests for the "mail.domain.com" address. Take a look at the Name-based Virtual Host documentation on the Apache website. You'll also need a mod_rewrite rule to do the redirection. Your configuration would look something like this:

    NameVirtualHost *:80
    <VirtualHost *:80>
      ServerName mail.domain.com
      RewriteEngine On
      RewriteRule (.*) http://mail.domain.com:8080/\1 [R=301,L]
    </VirtualHost>
    
    radius : Rewrite is useless if a virtual host is created; RedirectMatch will probably be faster.
    troyengel : What are you talking about, radius? Rewrite is perfectly fine in a virtualhost as listed above. The code is a little broken (use $1, not \1), but other than that it's good.
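
For reference, a sketch of radius's first option, the RedirectMatch form inside a dedicated virtual host (assumes name-based virtual hosting is already enabled):

    <VirtualHost *:80>
        ServerName mail.domain.com
        # send every path to the same URL on port 8080, preserving the path
        RedirectMatch 301 ^(.*)$ http://mail.domain.com:8080$1
    </VirtualHost>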