Monday, January 10, 2011

Copying data from server to PC on the same network

I recently set up a server to host my "data": movies, music, and such.

I was trying to copy the data back to my home computer using this command

scp files/on/server user@homecomputer:/home

The response was "unknown user@computer". I then replaced the host name with the IP address, but got the same result. How can I fix this?

Please and thank you.

  • Could you post the exact error message? Perhaps the user name on your home computer is a different one?

  • It's very unlikely that you have write access to /home (you'll need to specify something like /home/user instead).
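
    For example, keeping the shape of the question's command but pointing at a writable home directory (the user name here is a placeholder):

    scp files/on/server someuser@homecomputer:/home/someuser/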

    From JanC
  • Do you have the openssh-server package installed on the other computer?

    From frabjous
  • I can't give a definitive answer without knowing your network organization (which you should have described more precisely), but here's some information that may help you.

    You mention a “PC on the same network”, but also a “home computer”. Is your “home computer” really on the same network as the server? If not, maybe the home computer is behind a firewall and you can't initiate a connection from the server to the home computer, only from the home computer to the server. In particular, if the home computer is behind a NAT and the server is outside the NATed area, the server simply can't see the home computer.

    Other answers have noted potential problems with your command. If you had cut and pasted the exact error message (which you should have done), it would have been possible to tell whether these potential problems were actual problems.

    In any case, reverse ssh connections (i.e. ssh from home to server then back from server to home) are hard to manage (potential firewall trouble, potential authentication trouble, need to run an ssh server locally). So instead of initiating the copy from the server, initiate it from the home computer.

    Furthermore, scp is not a particularly good tool to copy a large number of files. If you want to make a one-time copy, or if you're always going to copy files from the server to the home computer and never the other way round, use rsync. If you want to keep the two computers synchronized, use unison — it's easier to use and less error-prone than rsync for two-way synchronization.
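
    For instance, a minimal one-way pull, run from the home computer, might look like this (the host name and paths are hypothetical):

    rsync -av --progress user@server:/srv/data/ /home/user/data/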

    From Gilles

What's the Bitvise Tunnelier equivalent for Ubuntu?

I used Tunnelier in Windows and it was perfect. After migrating to Linux I, surprisingly, can't find anything that does the following:

  • manage my SSH connections
  • use Terminal and SFTP browser
  • save my connections as profiles to load later

I found PuTTY and gSTM, but they really don't do what I mentioned above.

What do you recommend?



moved here from superuser

From ubuntu takpar
  • Nautilus (Applications -> Accessories -> File Browser) can connect to SSH servers, browse files over SFTP, and save the connections as profiles (saved data includes server name, share name, username, and password).

    To open an SFTP location, go to Places -> Connect to Server... and choose "SSH" or FTP as the type, then enter the rest of the details - see this guide for detailed steps. If you give the connection a bookmark name, the profile will be stored for easy access in the left-hand side pane (View -> Side Pane).

    The only thing I don't know how to do within Nautilus is "open a terminal", for which you can use PuTTY, etc. I agree it is not yet as integrated as the option you mention, but it may be possible to find a way to "open a terminal" while on an SFTP location, opening an SSH connection in a terminal and jumping to the directory.

    takpar : the main problem is SSH. I want an application where I just pick a profile and it connects and opens a terminal for me (without asking for a password or IP each time).
    Ravi Vyas : you can bookmark the connections you make using "connect to a server" and if and when you need the command line you can use the terminal "ssh user@domain.com"
    Gilles : @takpar: to avoid being prompted for a password, use ssh key authentication instead (see the link in Source Lab's answer). Even if you use a password for the key (which is better security), you'll only be prompted once per session.
    From koushik
  • AFAIK there is no single program that can do all of this for you on Linux. You can do it on Linux, but there isn't a pretty GUI for it.
    Password-less login can be done by using SSH keys (you might still want a passphrase for your SSH key, but you only have to enter it once!). Have a look here.
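
    A minimal key-setup sketch (the remote host name is a placeholder):

    # generate a key pair; accept the defaults and optionally set a passphrase
    ssh-keygen -t rsa
    # install the public key on the remote machine so it accepts key-based logins
    ssh-copy-id user@remotehost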

    By configuring the SSH client on a per-host basis you can have individual settings for different hosts. Have a look at the file /home/user/.ssh/config (it might not be there yet; just create it). Mine looks something like this:

    CheckHostIP yes
    ConnectionAttempts 3
    ServerAliveInterval 10
    
    Host router
            HostName 10.0.0.1
            User root
    
    Host test
            HostName test.example.org
            User test32
            ForwardX11Trusted yes
            ForwardX11 yes
            Compression yes
            CompressionLevel 6
    
    Host lucretia
            User lasse
            HostName 8.8.8.8
    
    Host home
            User coax
            HostName 8.8.8.9
    
    Host lovelace
            User lasse
            HostName 8.8.8.10
    
    Host mailserver
            User lasse
            HostName 8.8.8.11
            ForwardX11 yes
            ForwardX11Trusted yes
    

    Everything before the first Host declaration is common to all connections. For more options, look at the man page for ssh_config.

    Once you have set up the config file, you can use ssh home instead of ssh 8.8.8.9 -l coax. These options also apply to Nautilus for ssh:// browsing.

    You then have two options for quickly launching an SSH terminal session. One is to create a gnome-terminal profile for each host and create launchers that run gnome-terminal --profile='profile-name'.
    The other option is to install sshmenu; IMHO not a super app, but it does a good job of providing quick access to remote terminals.

    From Source Lab
  • Try HotSSH (found in the repositories). It manages the ssh connections very nicely, including connection sharing. I don't think it handles SFTP, though as someone else has noted, you can do that through Nautilus.

    From Nerdfest
  • Pretty simple: PAC Manager. Download it from http://sourceforge.net/projects/pacmanager/ or, even better, add the GetDeb repository, then update and apt-get install pac

    takpar : i looked at its screenshots. it seems to be great!! i will try it... +1 for intro
    takpar : i tested it. it was great! i'm surprised no one next to me knew about it!
    From perseo22
  • You manage the different connections in different programs that use them. Here's what you do:

    • add the SSH server as a mount using Nautilus (Places -> Connect to Server) and tick 'Add bookmark'

    • put your public key in the .ssh directory on the server. It's not smart to keep reconnecting with your real password all the time; you should be using a passwordless setup if you want to prevent man-in-the-middle attacks.

    • gnome-do automatically indexes known SSH hosts, so you can connect quickly using that. There is also an SSH applet, and you can of course just add a few aliases to your bash configuration.

    From Ralf

Server Converting Files for iPhone

I would like to convert videos I download to the iPhone format on my server and be able to access and play them on my iPhone.

I looked into Handbrake, but I wasn't sure if it would work on the server.

Any suggestions you may have to set this up would be much appreciated.

Please and thank you.

  • There is a command line version of handbrake that should work on your server. Below are instructions for installing from a PPA and converting the files using the command line.

    $ sudo apt-get install python-software-properties
    $ sudo add-apt-repository ppa:stebbins/handbrake-snapshots
    $ sudo apt-get update
    $ sudo apt-get install handbrake-cli
    $ HandBrakeCLI --preset "iPhone & iPod Touch" -i input.xxx -o output.mp4
    
    garbagecollector : no such command `add-apt-repository`
    garbagecollector : could you also tell me where you got the cli interface for handbrake?
    Marco Ceppi : @garbagecollector What version server do you have?
    garbagecollector : 10.10 i know that command only works with 9.10
    Marco Ceppi : Not true - that command should work on 9.10 and 10.04 (at least those are the two I've tested) Make sure you have `python-software-properties` package installed
    garbagecollector : that did the trick. thanks! :) i am sorry i have 10.04 server running. :)
    fluteflute : I've amended my post to allow for the installation of `python-software-properties`
    Marco Ceppi : @garbagecollector - it's an odd thing because that package is standard on `ubuntu-desktop` never realized it was optional on the server deployment
    fluteflute : I guess most PPAs are for desktop packages so it does make sense.
    From fluteflute
  • I don't have an iPhone anymore, but back when I had one, I made a little script to do just that. Here it is :

    #!/bin/bash
    # No argument given: print usage and exit.
    if [ -z "$1" ]
    then
        echo "This script takes a video file as parameter, and tries"
        echo "to convert it to MPEG-4 in an iPhone-compatible format."
        echo "A file list, or wildcard characters, can be used as parameters."
        exit 0
    fi
    
    # Convert each file given on the command line.
    for file in "$@"
    do
        ffmpeg -i "$file" -f mp4 -vcodec mpeg4 -maxrate 1000 -b 700 -bufsize 4096 -g 300 -acodec aac -ab 192 -s 480x320 "$file.mp4"
    done
    

    You can copy it and paste it into a new text file. Then make this file executable (chmod +x [filename]) and run it from the command line, with the source video file as parameter (multiple files can be passed for multiple conversions, and wildcards are accepted).

    The resulting file will be named the same as the source, with the ".mp4" extension added to it.
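
    For instance, if you saved the script as toiphone.sh (the name is arbitrary):

    chmod +x toiphone.sh
    ./toiphone.sh holiday.avi clips/*.avi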

    Looking at it, it seems you only need ffmpeg to use it. Maybe some codecs too, but I guess you already have them if you've played with video conversion before :)

    If you don't already have it installed, try :

    sudo apt-get install ffmpeg
    

    Hope this helps.

    garbagecollector : @littlejawa I would have used this, but I managed to get handbrake working.

How to solve "Unknown command 'keystatus'" when booting Ubuntu 10.04?

Hello, I've installed Ubuntu 10.04 using Wubi. When booting for the first time, everything worked fine. However, when I try to boot now I get this error and booting freezes.

Error: Unknown command "keystatus".

If that helps, I'm using HP ProBook 4510s and Windows 7.

From ubuntu Keiji
  • I don't have a specific answer, but if you're able to boot into Windows I would start with a check that all is well with the Windows partition by running chkdsk /r as described on the WubiGuide wiki. You might also want to take a look at the section on repairing using a LiveCD in the same location.

    If you're also not able to boot into Windows, you'll probably have to boot from a Windows CD into the recovery environment and repair the MBR.

    From moberley
  • keystatus is a GRUB2 command which needs the grub module of the same name to be loaded to work. I assume if the command is unknown that module can't be loaded anymore for some reason...

    Did you install any updates between the first & second boot?

    From JanC

How to run cron job when network is up?

I have some anacron jobs which run daily. The scripts update local bzr and git repositories. Naturally these scripts need a working network connection. I'm on a laptop and often the wired and wireless internet do not come up fast enough. This results in my cron jobs timing out while pulling the repositories =(

So:

How to make sure the internet is up before running specific cron jobs? Or how to fail a job if there is no network, such that it is retried by anacron later again?

From ubuntu Dima
  • What I do is create a shell script that does what you need, i.e. checks for a network connection and then fires off the updates. Then call the script from cron.

    From nixternal
  • To expand on nixternal's answer: the fping binary is excellent for that. You can cook it up in one-liners such as

    $ fping -q yoo.mama && echo yes
    $ fping -q www.google.com && echo yes
    yes
    $ 
    

    As you see, yoo.mama does not like me but Google does. In crontab, you'd do something like

    5 5 * * *  root   fping -q google.com && /some/script/I/want --to --run
    
    aperson : That won't run the command later if the network is down. How could that be accomplished?
  • I made a cron job that did a ping test on a DNS server to ensure networking. Something like this:

    ping 8.8.8.8 -c 1 -i .2 -t 60 > /dev/null 2>&1
    ONLINE=$?
    
    if [ $ONLINE -eq 0 ]; then
        # ping succeeded: we're online
        :
    else
        # ping failed: we're offline
        :
    fi
    

    Recently I've used something like this:

    #!/bin/bash
    
    function check_online
    {
        netcat -z -w 5 8.8.8.8 53 && echo 1 || echo 0
    }
    
    # Initial check to see if we're online
    IS_ONLINE=$(check_online)
    # How many times we should check if we're online - prevents infinite looping
    MAX_CHECKS=5
    # Initial starting value for checks
    CHECKS=0
    
    # Loop while we're not online.
    while [ $IS_ONLINE -eq 0 ];do
        # We're offline. Sleep for a bit, then check again
    
        sleep 10;
        IS_ONLINE=$(check_online)
    
        CHECKS=$(( CHECKS + 1 ))
        if [ $CHECKS -gt $MAX_CHECKS ]; then
            break
        fi
    done
    
    if [ $IS_ONLINE -eq 0 ]; then
        # We never were able to get online. Kill script.
        exit 1
    fi
    
    # Now we enter our normal code here. The above was just for online checking
    

    This isn't the MOST elegant - I'm not sure how else to check via a simple command or file on the system, but this has worked for me when needed.

  • I think you can use Upstart to help you there. Mind you, I haven't tested that the code below works, but something very similar should.

    # /etc/init/update-repositories.conf - Update local repos
    #
    
    description     "Update local repos"
    
    # this will run the script section every time network is up
    start on (net-device-up IFACE!=lo)
    
    task
    
    script
        svn up && git fetch
    #   do some other useful stuff
    end script
    

    That's pretty much it. You might want to add some code to check that it does not run too often. You might also want to add start update-repositories to your crontab; it'll make sure your update still happens if you are on the net constantly for a prolonged period of time.
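
    A sketch of such a crontab entry (the schedule here is arbitrary; the job name matches the .conf file above):

    # /etc/crontab: also kick the Upstart job once a day at 06:00
    0 6 * * *  root  start update-repositories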

    From vava
  • You can talk to NetworkManager to see whether you are connected or not:

    state=$(dbus-send --system --print-reply \
        --dest=org.freedesktop.NetworkManager \
        /org/freedesktop/NetworkManager \
        org.freedesktop.NetworkManager.state 2>/dev/null \
    | awk '/uint32/{print $2}')
    if [ "$state" = 3 ]; then
        echo "Connected!"
    else
        echo "Not connected!"
    fi
    

What are the biggest barriers to walking the MOTU/developer path?

For those who are not MOTU (people who maintain the Universe and Multiverse software repositories) and do not have plans of the "I will apply to MOTU by $date" variety:

What keeps you and others like you from trying to become MOTU? What makes you think you couldn't become one?

I'm referring to both social and technological barriers.

EDIT: I'm only saying MOTU because it's a pretty generic group, but "why aren't you packaging / patching and intending to eventually try for upload rights?" is an even more general version.

From ubuntu maco
  • I think the biggest technical barrier is knowing how to create Debian packages. While it is relatively simple to create a working package, it is much harder to create packages up to the standard of Debian and Ubuntu. Also, the guides on how to create packages normally deal with a situation in which you have the source code that requires compiling. This can be confusing for applications written in interpreted languages.

    The biggest social barrier is probably knowing how to get packages uploaded into the universe/multiverse repositories. It is a lot simpler to just create your own ppa and upload packages there.

    From dv3500ea
  • I think there are several reasons for this. I also think the reasons are often individual.

    One of the issues at this time is the change in the whole MOTU system. I believe the changes can be confusing, and they have been implemented more along technological lines and unfortunately did not bring the community fully on board (maybe just because it is confusing).

    I also think that, in some cases, the motivation to be a MOTU is not as clear as it could be. IMHO, being a MOTU is a responsibility, not a privilege. It is not about the title, but about the ability to help the Ubuntu community through the access rights that come with it. Because of this, the whole approval process could perhaps be modified (or extended). MOTUs usually nominate themselves, and then the board checks whether they are ready to be MOTUs. Maybe it should be possible for peers who believe that someone is ready to be a MOTU to nominate that person. This would IMHO better reflect the fact that the nomination is done to help the process, not to obtain a title. I understand that making this the sole way has its problems too; therefore, I would rather see it as an alternative than the only way.

    I also know there have been some problems in the past with people focusing more on KDE. These problems have hopefully been addressed, but maybe it would be good if that were also more widely known.

    Obviously, these are just a couple of issues that I have noticed. People are different and will see different things, or be affected differently by the same thing. So these issues might not stop everyone, nor are they the sole reasons for this problem.

    maco : Sponsors *should* be telling the people whose packages they sponsor when they think they are ready, "hey, maybe you should apply now," but I don't know how often that happens. I've suggested applying to one person I was mentoring, but he changed his focus to other areas of development.
    txwikinger : It is still a difference when a sponsor tells someone to apply, or this person is nominated by a sponsor.
    lfaraone : Uh? Sponsors don't nominate people, Sponsors advocate self-nominations by the sponsoree.
    maco : lfaraone: txwikinger is suggesting that sponsors should be able to nominate people. It has happened once. Some folks went and created a wiki page for Sarah Hobbs and emailed the TB and gave testimonials and so by that point when there was a *clear* outpouring of support, then she showed up to the IRC meeting to take the last step.
    txwikinger : @lfaraone: I am suggesting that some good people will not self-nominate and we therefore lose them. In the end, a good person becoming a MOTU is a win for Ubuntu; maybe we should think about this.
    From txwikinger
  • Nowadays people like drive-by contributions.

    20 years ago you would typically focus a lot of your energy on a pet project, if you had one. Today you visit dozens of Internet pages a day, and there are lots of social networks and other communities where you can contribute to wikis, forums and other stuff. While this has led to more people contributing, it has also led to people expecting low-barrier entry (a la "just click the website to edit it"). Otherwise they may just turn to other communities.

    Therefore you should look for barriers in the MOTU process. I remember the GroundControl project to lower the barrier for patch contributions in Launchpad-hosted projects. Maybe you need similar new tools, so new MOTU candidates don't have to fiddle with a lot of command-line tools. While the current tools may be powerful, it probably takes a lot of energy to learn how to use them correctly.

    maco : I don't know if I like the idea of people who can't use the shell maintaining packages, since shell scripting is an important part of packaging (that is, there are shell scripts that you need to write / modify to make many packages work).
    Bananeweizen : @maco: Do you want to get new contributors or not? If so, you should accept that processes may need to change (and not just the people involved in the processes). Elitist thinking will exclude a large part of the potential community. And if you want to get a distributed effort to get started, the command line is generally a very bad tool to support that.
    maco : That's like saying "you need to know some C to write a kernel patch" is elitist. You just plain need to know how the command line works to write the scripts that go into a package. Even if you had a GUI for making a package, it'd end up with a bunch of "type the postinst shell script here" textboxes.
    Bananeweizen : My comment was not about technical necessities. I'll try to rephrase (I'm not a native English speaker): First you ask for additional contributors. Afterwards I read in your comment: if you can't write shell scripts, you are too dumb to participate in packaging. That upsets me. I still believe your assumptions are wrong. Until Ground Control everybody had to know version control systems to be able to patch some project in LP. Instead of making version control easier, GC concentrated on the single use case of patching and removed the need to know anything about version control systems.
    maco : I didn't say "dumb" anywhere. I said it's a necessary skill. For any somewhat-complex package, you *will* have to write a shell script. Ignorance (having not *yet* learned a certain skill) and intelligence are in no way the same.
  • Provide better documentation.

    I have taken part in the developer week IRC sessions related to packaging and MOTU stuff (twice already) and found that during those sessions you typically get a vague understanding of the process. But if you look at the Ubuntu wiki pages two weeks later, you can't put all the pieces together anymore. Those pages often are kind of bullet-point lists from people who already understand the process in detail. But that is not enough to make the content understandable for newbies.

    So maybe you should try to get the documentation wiki pages to explain the process, tools and people involved in more detail, or even with complete examples. During the IRC sessions there are always repeatable examples; maybe those are what make the difference compared to the wiki pages.

    maco : I agree the wiki pages are not very helpful. I found Daniel Holbach's videos on YouTube most helpful when I was starting out. Are logs of the IRC sessions posted to the wiki?
  • The biggest barrier I've found is the Ubuntu developer page: http://www.ubuntu.com/community/get-involved/developers

    So many times, I've gotten enthusiastically determined to contribute at least 1 patch to Ubuntu... so I go to the natural place on the website... and end up lost in a sea of documentation. Hours later, I still have no idea what I should write a patch for. When I look through Ubuntu bugs, I often find patches... many that just sit there unused.

    As far as packages go, I've tried to figure out how to make them; it's really confusing. I also tried to get involved in Launchpad, but the interface is so much more complex than SourceForge that I couldn't get my own code on LP. It's very difficult for a new user.

    Owais Lone : Yes, launchpad design has a problem. Things are not obvious on LP. It's easy but you have to look for it a lot. New users quickly get lost. It needs a redesign to make it more obvious and simple like GitHub.
    From Greg
  • Being a MOTU is a responsibility.

    Well, obviously the #1 reason is not being technically knowledgeable enough, and the #2 reason is having a squillion things you'd rather do. But amongst your target audience, I think the main reason is that it's a responsibility.

    If I compile a package for myself, no one else cares about whether I've followed the technical and legal policies. No one will come to me expecting that I package a newer version. No one will ask me to fix bugs.

    If I upload my package to a ppa, a few people may care. But the expectations aren't as high. I can just vanish and let people complain on their blog how sad it is that the package isn't available for natty narwhal.

    If I become a MOTU, suddenly I have a big responsibility. Users will come to me with bug reports and complain if I don't solve them yesterday. Users will expect that I upload the new version of the package as soon as it's available upstream. I'll have to explain to nontechnical users how to figure out what they did wrong. Unlike posting on a forum, I'm not supposed to ignore the questions I don't feel like answering. And other developers might go after me because I messed up something — this can be intimidating.

    And what do I gain?

    • A fuzzy feeling that I've helped people. That can matter. But if that's my main motivation, how can packaging software compare with helping at a soup kitchen or tutoring your out-of-work-immigrant neighbor's kids?

    • A bullet point on my resume? Meh, participating in FOSS as a programmer will be much more appreciated. (It gives you experience with things like project management and long-term maintenance that are hard to teach in college courses.) In fact, being a DD/MOTU looks suspicious to the many employers who frown on politically-involved employees (you're openly giving political support to FOSS).

    • A feeling of satisfaction? Much less than writing my own program from scratch would. Programming is a lot more creative than packaging. There's a big sense of achievement in it. There's bragging rights. But in packaging? It's a chore. It's not glamorous.

    (That's a third-person “I” above. I think the reasons I give apply to most people but to varying extents. Personally it's mainly having a squillion things I'd rather do, and packaging lacking a sense of creative achievement.)

    (Out of curiosity, does Ubuntu lack manpower?)

    maco : Yes, it does. Have you seen our bugtracker?
    Gilles : @maco: On the [MOTU page](https://wiki.ubuntu.com/MOTU), I see easily what a MOTU is and how I could become one. I don't see anything about “Uncle Ubuntu needs YOU!”. I don't think the bugtracker tells much to a casual user; for example lots of unclosed bugs could mean lots of report-and-run users who don't post enough information to reproduce the bug.
    Javier Rivera : I must totally agree with Gilles. If I had more time to devote to open source I have a couple of projects that I'd love to program.
    maco : There are a lot of bug like that, but they get closed due to inactivity eventually. There are ~2000 bugs with patches attached on Launchpad. Operation Cleansweep has been about going through and reviewing the patches and sending them upstream if good and rejecting if bad. If they're good and shouldn't wait a whole release cycle to get through upstream releases, they need to be packaged up. Though many are years old. We haven't kept up with the rate they are submitted.
    From Gilles
  • What stops me from becoming a MOTU?

    Even though Ubuntu is a very nice community (I've not been flamed for n00bie questions, yet), I think that there is little and incomplete documentation about the packaging process (even Debian's New Maintainer Guide is full of "this topic is out of the scope of this document" lines). If you take that fact and think about people whose first language is not English (like me), the process is even more difficult and chaotic.

    With simple, right-to-the-point documentation everything would be easier for all of us, but the people who have the technical skills to write that documentation are too busy to do it.

    From alucardni
  • I posted a few ideas here: http://blog.mitechie.com/2010/08/24/ubuntu-help-wanted/

    One thing I really want to bring out is, I wonder how many developers don't use build systems that easily plug into the packaging tools. I'm doing Python development. My world centers around setuptools and distribute, and yes, I can take something I build with those and export it over, but to what end? I already have something that's distributable. I wonder if the rise of scripting languages with their own build tools and distribution methods causes a lack of experience and desire in getting things put together with Debian packaging tools, and thus at the MOTU level.

    From Rick
  • For me it is probably time related. Currently I do not have a lot of time to invest. I started off with bug triaging, but soon found out that things were a bit more complicated, and you really need to sink your teeth into it.

    Then there is bug fixing, which I know I would enjoy. What is keeping me from helping out there is that you need to run a development branch or something. I once started to work on a papercut of mine in the System Monitor (https://bugzilla.gnome.org/show_bug.cgi?id=611738). So I started off using Ground Control to fetch the required source, get in there and fix the bug. However, it turned out not to be so easy, because of dependencies. I know that I should only work on the development version, and test if it is fixed there. However, just to try that I needed to download the source of many other GNOME packages, which is not that easy with Ground Control. And you probably should do that on a work machine. So I stopped there. (Again, it would take me too much time just to get started on this.)

    Concerning packaging, I am just not aware of anything that needs packaging. I once did a tutorial on packaging, and found it not too difficult for small applications. However, I never went out looking for a list of stuff that needs packaging, though I know there probably is one... :)

    So basically for me it is just time. I want to help out, but I only have a couple of hours (2 or so) every odd week, and in that small amount of time I seem to be unable to get started with this.

    maco : You don't need the source of the dependencies, just the regular debs. Why not set up a VM of the development release to work in? Then you don't have to muck with your setup (though, I've been running devel releases almost continuously since Feb 2007...more than a year before I started doing anything related to packaging / fixing Ubuntu bugs). Fixing a bug a week on 2 hours is definitely possible once you have your environment set up. As to a list of things to package: there's a needs-packaging tag on Launchpad. Packaging up existing patches is *very* useful too!
    From balachmar
  • When I create a package, it's usually to scratch an itch of mine, not because someone else wants the package. Checkinstall is good enough to make a package for me, and then my itch is scratched, and I have no personal incentive to go the extra distance to package it manually, and figure out all the dependencies and stuff.

    So I guess that even if packaging for distribution is easy, it's still a lot more work beyond packaging for yourself.

Clock stops ticking when inactive, causing drift

The clock applet drifts in time. The clock is set to "synchronize with internet ..." so it is correct at startup, but then if I stay inactive for some time, maybe 5 minutes or maybe 1 hour, the clock stops ticking. If I become active again, the clock applet moves again, but the time is now behind.

And it is not only the applet that is wrong, but the whole system date, because when I run date in a terminal, the time is also wrong.

Clarification : Sorry, maybe my question was not clear. Here is my bug report to Ubuntu:

Expected Behaviour :
Clock-applet displays the correct time,

Observed Behaviour :
Displayed time is drifting

How to reproduce :
If I step away from my computer for some time, the time displayed by the clock applet drifts. But the date command also shows the wrong time. Moreover, sleep intervals also get skewed. To debug this, I tested the following script:

#!/bin/bash
while [[ true ]]
do
    date >> clocktest.log
    hwclock >> clocktest.log
    sleep 300
done

It must be run as root because of hwclock. I launched it:

./clocktest.sh &

and got away from my computer

Here is the output log :

1 mardi 17 août 2010, 12:42:12 (UTC+0200)
2 mar. 17 août 2010 12:42:13 CEST -0.346882 secondes
3 mardi 17 août 2010, 12:47:13 (UTC+0200)
4 mar. 17 août 2010 12:57:13 CEST -0.080965 secondes
5 mardi 17 août 2010, 12:52:13 (UTC+0200)
6 mar. 17 août 2010 13:02:14 CEST -1.002776 secondes
7 mardi 17 août 2010, 12:57:18 (UTC+0200)
8 mar. 17 août 2010 13:07:18 CEST -0.063633 secondes
9 mardi 17 août 2010, 13:02:18 (UTC+0200)
10 mar. 17 août 2010 13:12:19 CEST -0.361501 secondes
11 mardi 17 août 2010, 13:07:19 (UTC+0200)
12 mar. 17 août 2010 13:17:20 CEST -0.987434 secondes

Lines 1 and 2 show the first time through the loop.
Lines 3 and 4 show the bug: while date (and sleep) think 5 minutes have elapsed, hwclock shows that 15 minutes have elapsed.

Lines 5 to 12 show normal behaviour, except that date is now late by 10 minutes. Behaviour is normal because I was back at my desk using the computer.

Having clock applet displaying the wrong time is one thing, but having the whole system time wrong (since sleep gets confused too) is a major bug.

Hardware : It is a Fujitsu Siemens Amilo Xi 2550 notebook. It was working fine with Ubuntu 8.04.

  • Your CMOS battery seems to be dying. Open the computer, and there's a little thing that looks like a large watch-battery on the motherboard. Replace that.

    garbagecollector : @maco it's a battery, it's either dead or alive. Maybe, but it's a long shot that this is the problem.
    msw : It is possible for a battery to be marginal, but the OP claim "so it is correct at startup" is (as yet) unsubstantiated and more information is needed to see if the "CMOS" RTC is bad or the soft system clock is weird. The distinction between the two is explained at http://tldp.org/HOWTO/Clock-2.html
    maco : I've had watch batteries drift when they were not-quite-dead. And I read correct at startup because of NTP to mean...well, that NTP was what's making it correct (ie, it's doing a sync at boot)
    shodanex : This has nothing to do with a dying CMOS clock. The CMOS clock seems to be just fine.
    From maco
  • Are you perhaps running Ubuntu inside a virtual machine? If so, which brand?

    shodanex : no, see question for more info on the hardware
  • Hello shodanex, I've been experiencing the very same problem since last week.

    My Lucid (2.6.32-24-generic #41-Ubuntu SMP Thu Aug 19 01:12:52 UTC 2010 i686) runs on a fanless VIA desktop PC with the OS's screensaver and power savings turned off completely, except frequency scaling, which is set to "on demand". No VM is used.

    I used your script (extended by a line that also logs /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq) to observe that the system time does not DRIFT (in relation to the CPU's clock, maybe) but really STOPS approx. 8 minutes after I "leave the desk". It continues to run at normal speed after I "return" and keeps a fixed offset from then on.

    For me, a high load (transcoding a video overnight) did not keep the clock alive; I got an offset of 10 hours this way!

    Independent of the OS (it must be some BIOS setting), my screen goes blank after 15 minutes of input inactivity, but it comes back on instantly when I touch the mouse, and the applications are absolutely responsive (so no suspend or hibernate).

    The gnome panel just reflects the system time, so the problem is in a deeper layer. I guess the new kernel just can't maintain the time correctly on hardware that does some unexpected power saving (laptops, fanless devices). Though my post does not provide a solution, maybe you don't feel so alone now :) Keep trying! Cellaz

    From cellaz

Laptop fan not turning on when needed

It looks like the fan in my laptop is not turning on when appropriate. I already removed granola (maybe it disables the fan for far too long?)

When I put the computer in sleep mode and resume, it immediately turns on the fan (the CPU was waaaaay overheated). So the question is: why does it not turn on when needed?

There are multiple issues here: why does this happen, and how do I diagnose it? How can I control when the fan should turn on, how do I test that the code is turning on my fan, and how do I turn on the fan manually through a command if all hell breaks loose?

  • It appears it may have been granola. After removing it, the fan does seem to turn on, as I hear it working at low speed at the moment.

    Will experiment more, though I could still use help finding out how to diagnose the problem anyway, just in case it was not granola but a random issue instead.

    Dmitriy Likhten : Have not had problems since the removal of Granola. Will contact them to try to resolve this, or diagnose it.

How do I remove Ubuntu?

My friend installed Ubuntu on a separate partition on a PC with Windows 7 using Wubi. But by mistake he reformatted the drive containing Ubuntu. He still gets the Ubuntu option in the boot menu. How can it be completely removed?

From ubuntu Shubh
  • You will need to uninstall Ubuntu from Windows 7 - you can do this in the Add/Remove software section of the Control Panel or by running the Wubi installer again (it should inform you that you need to uninstall first).

  • You can also use EasyBCD to remove the boot option, but you should try uninstalling first. http://neosmart.net/dl.php?id=1

    From AJenbo
  • I don't think it's an Ubuntu problem.

    (assuming that you are using Windows)

    You have to edit your boot.ini file:

    Run -> msconfig -> BOOT.INI -> check all boot drives

    If it does not find any OS mentioned in the boot.ini file, it will show you an error message saying that your path is invalid and ask if you want to remove it. Just confirm it and you are done.

    Works in Windows XP. I hope it's the same in Windows 7 too.

    From Rojan
  • Try this tutorial: Easily Set Default OS in a Windows 7 Dual-boot Setup

    In the last step change the dropdown to Windows 7 (there should also be a Ubuntu/Wubi option). You can also change the "Time to display list of operating systems" to '0', or something very short.

    (Image: Windows 7 dialog for changing the default operating system)

    Alternatively you can try the Ubuntu Wiki instructions for manual removal of Wubi.

    From fluteflute
  • The question is whether it is GRUB or the Windows boot menu that your friend sees.

    GRUB in the MBR requires rewriting the MBR, which isn't easy from inside modern versions of Windows.

    The Windows boot menu can be fixed in the System control panel.

  • If Wubi does not appear in Add / Remove programs, you should be able to download it again, run it, and it will jump straight into the uninstall process.

    If that does not work, the following manual steps will get rid of it, save removing registry keys:

    1. Remove the Wubi directory, if present.
    2. Remove any wubildr files in your C: drive.
    3. Use bcdedit /delete to remove the Wubi entry from the Windows bootloader.
    From Evan

my webcam won't work - how do I debug it?

My HP 6930 laptop has a built in webcam. This used to work just fine when I had Windows installed.

Now that I am on Ubuntu, the webcam does not work.

I installed Cheese, but this is what I get:

(screenshot of the Cheese error: "No device found")

How do I go about debugging this?

  • Install cheese (webcam software) by searching for 'cheese' in Ubuntu Software Centre.

    Then run cheese in a terminal (Applications -> Accessories -> Terminal) using this command:

    cheese
    

    Have a look at any errors that are printed in the terminal. You can copy these by selecting the text and pressing ctrl-shift-c.

    These should give you a clue to what the problem is.

    You could use the same process for any other software that you are using that uses the webcam.

    Raj More : I tried running in terminal - no errors, no messages.
    From dv3500ea
  • "No device found" sounds like no driver is found/loaded for the camera...

    1. Some laptops have a switch/key to enable/disable the webcam; make sure it's on. ;-)
    2. Does your webcam show up in the output of the lsusb command? If it does, can you provide us with the line that describes the webcam? (If you're not sure which line it is, feel free to add the whole output of lsusb to your original post.)
    From JanC
  • Do you have a /dev/video file? This is how webcams are exposed. Also, try launching gstreamer-properties and inspecting the 'Video' tab.
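
    For example, a quick check from a terminal:

    ls -l /dev/video*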

  • The camera on that model seems to be a Chicony; running the following should give you one line with the camera model:

    lsusb | grep 04f2

    You can then look up your model (search for the product ID, the number after 04f2:) on this site: http://www.ideasonboard.org/uvc/

    From AJenbo

How do I change the language used by the GNOME spell checker?

I'm a Canadian, but often get American spelling suggestions from the spell checker in Ubuntu. How do I switch to a Canadian dictionary?

From ubuntu Mark B
  • The spell checker is based on your locale - if you switch it to en_CA (or en_GB if en_CA is not available) you should get more Canadian-like spellings.

    Get a list of installed languages with the following:

    locale -a

    You can view your current selected locale with:

    locale

    Once you've selected the one which best suits you, you can update it here:

    sudoedit /etc/default/locale

    Be aware if an entry from locale -a ends with .utf8 it needs to be entered as .UTF-8 in the default locale.

    After you make those changes you'll need to reboot for them to apply.
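
    A minimal sketch of what /etc/default/locale might contain after the edit (assuming the en_CA UTF-8 locale is installed):

    LANG="en_CA.UTF-8"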

  • Marco Ceppi is right about this being based on the locale, but the easiest way to change it is through System -> Administration -> Language Support in the graphical user interface (it will also make sure all the right packages are installed, etc.).

    From JanC

How to set up apache with fastcgi and a simple test script?

I've been trying for a few days to set up FastCGI with Apache on a Kubuntu server. Despite searching everywhere, I cannot make it work. If I try to run the site with the CGI application, Apache hangs and after the timeout returns a 500 error.

Here is what I did:

  • I made sure that mod_fastcgi is installed and enabled:

    # pwd
    /etc/apache2/mods-enabled
    # ls -l f*
    lrwxrwxrwx 1 root root 30 2010-07-22 10:01 fastcgi.conf -> ../mods-available/fastcgi.conf
    lrwxrwxrwx 1 root root 30 2010-07-22 10:01 fastcgi.load -> ../mods-available/fastcgi.load
    
  • As far as I am aware, fastcgi.conf is properly configured:

    <IfModule mod_fastcgi.c>
      AddHandler fastcgi-script .fcgi
      #FastCgiWrapper /usr/lib/apache2/suexec
       FastCgiIpcDir /var/lib/apache2/fastcgi
    </IfModule>
    
  • I am using this very simple sample program to test the setup:

    #include <iostream>
    using namespace std;
    int main()
    {
        cout << "Content-type: text/plain" << endl << endl;
        cout << "Hello World!" << endl;
        return 0;
    }
    
  • I compiled it. It works fine from the command line.
  • I placed it within a folder visible from the web server: http://127.0.0.1/fcgitest/run.fcgi
  • At first I get: "Forbidden. You don't have permission to access /fcgitest/run.fcgi on this server.".
  • I add a .htaccess file in the folder:

    Options +ExecCGI -Indexes
    
  • And now, when I try to access the script address from my web browser, I get the symptom I described at the beginning: the browser first hangs, and after the timeout, I get a 500 Internal Server Error.
  • The Apache error.log says:

    Content-type: text/plain
    Hello World!
    [Sat Aug 28 09:08:23 2010] [warn] FastCGI: (dynamic) server 
    "/var/www/fcgitest/run.fcgi" (pid 27758) terminated by calling exit with status '0'
    

It seems the output is written to the error logs!! Is there a missing socket configuration, somewhere??

  • As noted by joschi, CGI != FastCGI. A plain CGI program will fail in this context.

    http://127.0.0.1/doc/libapache2-mod-fastcgi/mod_fastcgi.html
    http://www.fastcgi.com/mod_fastcgi/docs/mod_fastcgi.html

    FastCGI Specification Compliance

    The FastCGI specification is not implemented in its entirety and I've deviated a bit as well resulting in some Apache specific features.

    The file descriptors for stdout and stderr are left open. This is prohibited by the specification. I can't see any reason to require that they be closed, and leaving them open prevents FastCGI applications which were not completely ported to FastCGI from failing miserably. This does not mean the applications shouldn't be fixed such that this doesn't occur, but is invaluable when using a 3rd party library (without source code) which expects to be able to write to stderr. Anything written to stdout or stderr in this manner will be directed to the main server log.

    From augustin

How do I put Ubuntu on a NON-flash external USB hard-drive?

Amazingly enough, circumstances have left me without a single USB flash drive or a working CD-R drive. Also, because I moved about six months ago, I got rid of all my extra Ubuntu CDs that I used to get by mail (cleanup win, hindsight fail). And yet, I need to get a live Ubuntu going to boot up a wonky desktop computer.

I tried using unetbootin to put a live CD install onto a portable USB hard-drive, but it won't boot from it (NTLDR missing error).

Is this because the disk is NTFS (which it is), or is it for some other reason? Is there a difference between booting from a portable USB thumb drive and a portable USB hard drive, other than potential performance?

From ubuntu Jono
  • You can use $ sudo aptitude install ubiquity

    Install that on your machine (it's the Ubuntu installer), then go through the installation instructions; make sure you set up partitions on the USB drive, and at the last step select Advanced and make sure the bootloader is installed onto the USB drive as well.

    This will give you a portable-ish Ubuntu on the USB drive (ext4 etc.).

    Currently the only tool that works for installing Ubuntu on NTFS is Wubi, but that is installed alongside Windows.

    Jono : I like that, in general. Doesn't really apply in this situation, since it's overkill. BUuuut, what makes it portableISH? Isn't it a full-on install that I can just take around with me?
    Dima : @Jono: unlike live-session it doesn't rescan and doesn't reconfigure X.org each time you start it. So you might experience weird graphics issues on older computers.
    Jono : Oh. Well, in that case I'll stick to pendrivelinux.com for my portable persistent needs.
    From Dima
  • If the computer is attached to the Internet, then use Wubi.

    http://wubi-installer.org/

  • There's no difference between a flash drive and a USB hard drive. Both can be used as a boot medium, and in the same way.

    If you want to put the live system (installer) on the disk, the partition needs to be FAT32. NTFS cannot be read at this stage. So the partition you boot from (where the live CD contents are put) needs to be formatted as FAT32.

    You can also install Ubuntu to the external hard drive, of course, just as you could with a flash drive. That's a different operation from using the drive as a live cd boot medium.
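
    If the partition needs reformatting to FAT32 first, a rough sketch (the device name is a placeholder; double-check it with lsblk or fdisk, as this erases the partition):

    sudo mkfs.vfat -F 32 -n LIVEUSB /dev/sdX1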

    JanC : Actually, GRUB2 can read & boot from NTFS (I would have to check to be sure about legacy GRUB; I'm sure it can boot from it but maybe not read, and in that case it requires manual fiddling to specify the sectors to read). Or is this an Unetbootin limitation?
    loevborg : I thought it was a limitation of the Live CD system.
    From loevborg
  • Maybe somebody from the Israeli LoCo team is nearby and can help you out with a live CD or USB or such? Try the chat, forums, etc.

    From JanC

Higher screen resolution in VirtualBox?

I've just installed Ubuntu 10.04 into VirtualBox on Windows 7.

Unfortunately the only options showing for screen resolution are 640x480 and 800x600 and the monitor is showing as 'Unknown'.

How would I go about upping the resolution to 1280x1024 (I'm on a 1600x1200 monitor)?

Update
I tried mounting the VirtualBox 'Guest Additions' ISO (from the VBox 'Devices' menu) and running sudo sh ./VBoxLinuxAdditions-x86.run from the mounted drive, which gave 2 new listed resolutions after a reboot (1024x768 and the 16:9 version of that resolution). These worked when I selected them but disappeared when I switched back to another resolution. I tried rebooting and running VBoxLinuxAdditions-x86.run again but only the 2 low-res options were listed this time.
I think I'm going to reinstall...

It seems to be a VBox problem rather than an Ubuntu problem, as after reinstalling 10.04 (overwriting the original virtual partition), sudo sh ./VBoxLinuxAdditions-x86.run now has no effect at all.

From ubuntu pelms
  • You need to install the VBox guest utilities to add support for the virtualised graphics hardware.

    sudo apt-get install virtualbox-ose-guest-utils virtualbox-ose-guest-x11 virtualbox-ose-guest-dkms
    
    pelms : No luck with that I'm afraid. I still only have the 2 low res options in Monitor Preferences :¬(
    maco : even after reboot?
    pelms : Yep. Even tried this straight after a fresh install of 10.04 (after the updates)
    From maco
  • Info from Lifehacker;

    here is the link: http://lifehacker.com/5583650/run-mac-os-x-in-virtualbox-on-windows

    see the comments.

    For those who want a custom resolution size:

    First off, the resolution problem is because of VB, not the (guest) OS. To fix it:

    1) Open up your (host) OS's terminal (or cmd in Windows).

    2) Next navigate to the VB (VirtualBox) folder, e.g. on Windows C:\Program Files\Oracle\VirtualBox. Once there, type in the commands:

    VBoxManage setextradata global GUI/MaxGuestResolution any 
    

    (this removes any restrictions in place for the resolution)

    VBoxManage setextradata "VM name" "CustomVideoMode1" "WidthxHeightxDepth"
    

    (VM name = your VM's name; WidthxHeightxDepth would look like 1600x900x32, for example; also, that command is all one line, not two)

    Your new resolution should show up. If it doesn't, just try again.

    EDIT: I found other users saying this method works, and other methods are suggested, like ignoring xorg.conf. You should try that (it's the second link) if this method didn't work; I will link them here:

    http://ubuntuforums.org/showthread.php?t=634140

    http://ubuntuforums.org/showthread.php?p=5145028#post5145028

    According to the above post, the xorg.conf should be edited using

    $ sudo gedit /etc/X11/xorg.conf
    

    and it should contain (after editing)

    Section "InputDevice"
    Identifier "VBoxMouse"
    Driver "vboxmouse"
    Option "CorePointer"
    EndSection
    
    Bananeweizen : Even if this answer might factually be correct (which I don't know), it is hardly readable and therefore will not help that much. Consider using formatting, links and so on.
    pelms : Thanks Siamore. At what point do you press 'F8'? It doesn't do anything once Ubuntu has launched.
    pelms : Why keep marking the guy down..? How many points do you need to be able to edit the post?
    Siamore : @pelms no need to press F8 it should work automatically
    Siamore : @Bananeweizen took your advice; I guess I tried to say too much, and it was my first answer. BTW I have -2 but still have 7 rep? How's that?
    From Siamore
  • What driver is specified in your xorg.conf? AFAIK, after installing the guest additions, the 'vboxvideo' driver should be used:

    Section "Device"
        Identifier   "Configured Video Device"
        Driver     "vboxvideo"
    EndSection
    
  • I can tell you how I do this with Mac OS X as the host system. Maybe it will work on Windows too.

    • I start ubuntu in VirtualBox
    • I open up the terminal on Mac OS X
    • and execute "VBoxManage controlvm [name] setvideomodehint 1280 1024 24" (replace [name] with the name of your Ubuntu VM)
    From
  • Once the VBox Additions have been installed (and the guest OS rebooted), press Host + H, then maximise the window; that should do the trick.
    If not, maybe you are using an old version of VBox (and therefore the VBox Additions might have an incompatibility).

    From Axel