Wednesday, February 9, 2011

Programming on the Asus Eee PC in Visual Studio

Has anybody tried programming on the Eee PC in Visual Studio?

I'm considering buying one so I can show some apps on the fly, but also make small changes to them if necessary, without the inconvenience of a large laptop.

Some key points I'm after:

  • How fast is it?
  • Would it suit the needs of a developer making small changes to code?

It sounds like the specs would get completely owned, but I've heard/seen strangely good things about the EEE Pc, like how it launches Word 2007 super quick on a nLite'd XP install. :)

  • I think the 700 series would just be a dog. The 900 series would be a far better choice with a bigger screen and faster RAM (but the same processor), but it's still not well-suited to Visual Studio 2008. I find VS cramped on my 12" tablet.

    Take a look at the Dell Inspiron Mini.

    From Robert S.
  • I would recommend something other than the Asus EEE; it's too small a "netbook" and the screen resolution is terrible.

    The HP Mini Note has a nice 8.9" display, a practically full-size keyboard and, best of all, a display that can do 1280 x 768, though you might need to bump your font sizes a bit. :)

    You also have the option of the Acer Aspire One which appears to be a much better netbook with a low price point.

    If you Google any of those netbooks you will find many reviews, and if you hit up YouTube you can find lots of hands-on video reviews.

    Ash : My eeePC 900 has an 8.9 inch screen with a resolution of 1024 * 600. Hardly terrible. Perhaps you are referring to the eeePC 700, with its 7 inch screen and 800 * 480 res, which I agree is not ideal when running Windows XP.
    Oddmund : I agree with Ash. Linux fits the eees better than windows.
    From mwilliams
  • I think it largely depends on the size of your project. A small project might not have too much trouble, but a large project would probably bring the thing to its knees. I've seen my work project in VS.Net 2008 eat up to 350 MB of RAM all by itself, not counting loading the OS and actually running the project. Also, you might use up a lot of hard disk space by installing Visual Studio on it. There isn't a lot of space on the EEE, unless you plan on using some kind of external USB hard disk.

    Personally, I would recommend a proper laptop. You could get something cheap and small, and you'd probably be a lot happier in the end.

    From Kibbee
  • I own an eeepc 900 and have successfully installed Visual Studio 2008, the MSDN library and SQL Server 2005 developer edition.

    The biggest issue was fitting it all in the 4GB solid state C Drive. In short, you can't. Therefore using the 16GB secondary internal flash drive is essential.

    The utility nLite was all I needed to do this. In summary, nLite lets you create a more compact version of Windows XP with just the components you need. Most importantly for the eeePC, it allowed me to easily tell Windows to use D:\ instead of C:\ as the destination for "Program Files" and "Documents and Settings".

    Then you re-install Windows from the nLite Windows image, with the required paths automatically set as required. (I strongly recommend this approach over trying to change the paths of an existing/running Windows install, due to the numerous application compatibility issues that can cause.)

    Unfortunately (on the eeePC 900 at least) the D:\ drive is slower in general use than the solid state C: drive. For Visual Studio this means the startup time can be slower than ideal (i.e. around 30 seconds). But I have 2GB of RAM and have completely disabled the Windows swap file, so once the data has been loaded into RAM, Visual Studio runs nicely.

    Overall I use Visual Studio on my eeePC for smaller projects and it is ideal for creating proof of concept type apps while on the move. While it is never going to be ideal as a main development machine, I can completely recommend installing Visual Studio etc on it.

    To help resolve possible confusion:

    The eeePC 9 series (900, 901) have an 8.9 inch screen, a resolution of 1024 * 600 and a total of 20GB internal storage; RAM can be upgraded to 2GB.
    The older eeePC 7 series have 7 inch screens with 800 * 480 resolution and a total of 4GB of internal storage (RAM up to 2GB?). As a development machine the 7 series is not really up to the job, however the 9 series certainly is.

    [Update]

    I now own an eeePC 900HA: 1.6GHz Atom, 2GB RAM, 160GB hard drive. Great little machine for proof of concepts and smaller programs. The biggest performance improvement comes from the standard 160GB HDD, much better than a pretend solid state drive and much cheaper than an equivalent real SSD.

    Kit Roed : Just confirming that the 7 series can indeed have one 2GB memory module installed as well.
    From Ash
  • http://www.hardforum.com/showthread.php?t=1303682

    It seems that other people have tried it; all have complained about the screen resolution, but surprisingly not the CPU. To be clear, I don't want to have all the panels open or use it primarily as a development machine, I just want the option to do so if possible.

    I'm looking at a 700 series; if it works it's a bonus, and if it doesn't, I'll just have to look into using SharpDevelop maybe (I'm a student without much money, so it really needs to be budget).

    Ash : @RodgerB, as a machine that will let you try things out, make quick code changes etc, the eeePC 900 and 901 are ideal. Where I live (Australia) 900s are becoming as cheap as 700s, so I'd definitely recommend a 900 if at all possible.
    From RodgerB
  • That'd cure my addiction to computers.

    From Vasil
  • More or less like Ash, I have an EEE PC 901, installed with VS2008 without SP1, ReSharper and the MSDN library. I didn't install SQL Server as I use MySQL most of the time. I install all my "important" tools, which means VS2008, on C: and the rest of the stuff on D:, as I prefer to have maximum performance for my VS2008. Like the others mentioned, screen size is quite a limiting factor, so I use ProFont at 8, shrank the default Windows UI, and turned off the theme too.

    Performance wise, the CPU is doing OK, but the SSD read/write speed is a factor. I benchmarked it and got around 30MB/s read and slightly more than 10MB/s write. When I try to load multiple apps, or when VS2008 is busy with something, it takes a much longer time to even load Notepad, so I've learned to be patient and load one thing at a time (on my desktop, I never have to wait and can load everything in one shot). I have 2GB of RAM and have been trying to allocate more RAM for disk cache, but haven't achieved anything yet.

    I use it to do onsite troubleshooting and minor touch-ups, or whenever I go outstation, plus watching my favourite CSI when I'm traveling :P. Anyway, the main reason I got this is its battery runtime: 7 hours. I doubt you can find another decent notebook that can match it. It produces so little heat that it can sit nicely on my lap, and standby is also quite seamless. I use standby extensively and even leave it on standby for days; the battery only drops about 10% per day. I can be seated working on my program and the next minute close my notebook and move to the next location without worrying that it won't go into standby (even if it doesn't, it can still last until the next time it's opened up, without burning the pouch along the way).

    I did look into the Acer Aspire One before I got the EEE PC; the Aspire One does have a wider keyboard and is much easier to type on, but the touchpad and battery put me off. I had been considering various 12" notebooks too before deciding on the EEE PC, as I used to have a 12" for 4 years. But a 12-incher doesn't have enough juice for me to work for more than 2 hours, and those that can run for 4 hours are just too pricey.

    There was one time when I came into my client's office earlier than usual, at 9 in the morning, started working on my notebook, left it on standby when I went for lunch, then worked until 5 in the evening; when everyone left, I still had 20% left on my battery. Knowing this, I can even leave the power adaptor in the hotel and just go around with a pouch. Way to go ASUS.

    EDIT: Sorry for the misinformation guys, I didn't realize that I only had VS2008 without SP1 on my Eee PC. I didn't realize the "difficulty" until Menelmacar asked me about it.

    From faulty
  • I am just trying to install SP1 and it seems that I will not be successful. So you think that pointing Program Files to the D drive will force the installer to use drive D: for the service pack installation? Currently I have 1 GB free on drive C but the installation needs 1.9 GB, although Visual Studio is installed to the D drive. You can see details about the installation here: http://blogs.msdn.com/heaths/archive/2008/07/24/why-windows-installer-may-require-so-much-disk-space.aspx .

    From gius
  • Wow, I've just installed .NET 3.5 and the disk requirements dropped to 1090MB. Hopefully I will be able to install SP1 without the reinstalling-and-changing-Program-Files-path gymnastics.

    From gius
  • Well, it works! So - if you are short of disk space (you need 1.9GB) while applying VS2008 SP1, try installing .NET 3.5 first. I would also recommend installing it from the ISO package (i.e., so you don't need to download the installer files).

    I was really surprised by the performance - I compiled a web site with five DLL projects and also started SQL Server and the development web server, and it was really good.

    From gius
  • @faulty: How did you manage to install Visual Studio 2008 SP1? I installed the Studio on drive D: (the one with 8GB). When I try to install SP1, the installer always tells me there is not enough space on drive C: (required: 4.6 GB, available: 920 MB). I already tried setting the TMP and TEMP environment variables to point to a drive on an SD card with enough space, and I can see that the SP1 installer does use this directory for temp files. But the installer insists there is not enough space on C:, and cancels the installation.

    Is there any other environment variable or setting to change in order to persuade the installer to do its work?

    faulty : Take a look at my correction. Sorry about that. I think gius and Mark's methods are plausible; do give them a try. Next time, you can actually comment on my post, so I can see it in my profile. This reply went unnoticed until I got an email about it.
  • I managed to get Visual Web Developer installed on the XP that came with my eee pc 901 and I've still got 1.3GB left on the C drive.

    1. First I freed up the 1.4GB on the C drive that VWD needs to do the install. I did this by following the instructions here... http://forum.eeeuser.com/viewtopic.php?id=40356 (the 'creating junctions' step for the windows installer/microsoft.net directories saves a lot of space)
    2. I downloaded the "Offline" Visual Studio ISO from (available at the bottom of the download page) here... http://www.microsoft.com/express/download/
    3. I then installed VWD from this ISO; remember to choose an install location other than the C drive!

    Once the install completed, it turned out only about 200MB-300MB is actually used on the C drive.

    From Mark
  • Just a thought or alternative suggestion that might be applicable...

    I regularly use Visual Studio without any issues on my eeePC. The trick is that I simply access another machine running Visual Studio remotely in order to do this. This lets me have the convenience and portability of the netbook, along with the full-scale computing power of a real development environment.

    Obviously this won't work if you don't have connectivity, but for me it's an ideal setup.

What are the Java command line options to allow a JVM to be remotely debugged?

I know there are some JAVA_OPTS to set to remotely debug a Java program.

What are they and what do they mean?

  • I have this article bookmarked on setting this up with Eclipse.

    Basically run it with:

    -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=1044
    
  • Here's some more gory details on what the options are:

    http://java.sun.com/javase/6/docs/technotes/guides/jpda/conninv.html

  • OK, by the way, the -Xdebug and -Xrunjdwp arguments are for Java versions prior to 5.0.

    For Java 5.0 and later, it's preferable to use the single -agentlib:jdwp option.

    So to summarize:

    • before Java 5.0, use the -Xdebug and -Xrunjdwp arguments
    • from Java 5.0 onwards, use -agentlib:jdwp

    Options for the -Xrunjdwp or -agentlib:jdwp argument are as follows (a full example command is shown after the list):

    • transport=dt_socket : the way used to connect to the JVM (a socket is a good choice; it can also be used to debug a remote machine)
    • address=8000 : the TCP/IP port exposed, for the debugger to connect to
    • suspend=y : if 'y', the JVM waits until a debugger is attached before beginning execution; if 'n', it starts execution right away
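
    For example (for Java 5.0 or later), a full launch command would look something like this - myapp.jar is just a stand-in for whatever you normally run:

    java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000 -jar myapp.jar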
    From paulgreg

Example of using Service Exists MSBuild task in Microsoft.Sdc.Tasks?

I'm trying to use the Microsoft.Sdc.Tasks.ServiceProcess.Exists task to check whether or not a service exists. There is no example of using it in the documentation though. Does anyone have one?

  • I've not actually used this in production myself, and I'm not sure which version you have (I have a copy of Release 2.1.3155.0), but according to the accompanying .chm help file the Task has the following properties:

    • DoesExist Returns TRUE if the service specified exists
    • IsDisabled Returns TRUE if the service is disabled
    • ServiceName The short name that identifies the service to the system.

    The ServiceName needs to be set to "The short name that identifies the service to the system, e.g. 'W3SVC'".

    You might want to give it a try with a well known service (e.g. mssqlserver) and check the result of the other two properties (DoesExist/IsDisabled).

    Update: Here's a sample (works):

    Import the tasks, then call (e.g.)

    <Microsoft.Sdc.Tasks.ServiceProcess.Exists ServiceName="Server">
      <Output TaskParameter="DoesExist" PropertyName="Exists" />
    </Microsoft.Sdc.Tasks.ServiceProcess.Exists>

    <Message Text="Service exists? $(Exists)" />

    From RobS
  • This is how we check whether a service exists, stop it if so, do something, and start the service again (if it existed and was running).

    Helper target:

    <target name="service_exists">
     <script language="C#">
      <references>
       <include name="System.ServiceProcess.dll" />
      </references>
      <code><![CDATA[
       public static void ScriptMain(Project project) {
        String serviceName = project.Properties["service.name"];
        project.Properties["service.exists"] = "false";
        project.Properties["service.running"] = "false";
    
        System.ServiceProcess.ServiceController[] scServices;
        scServices = System.ServiceProcess.ServiceController.GetServices();
    
        foreach (System.ServiceProcess.ServiceController scTemp in scServices)
        {
         if (String.Compare(scTemp.ServiceName.ToUpper(), serviceName.ToUpper()) == 0)
         {
          project.Properties["service.exists"] = "true";
          project.Log(Level.Info, "Service " + serviceName + " exists");
          if (scTemp.Status.Equals(System.ServiceProcess.ServiceControllerStatus.Running))
           project.Properties["service.running"] = "true";
          project.Log(Level.Info, "Service " + serviceName + " is running: " + project.Properties["service.running"]);
          return;
         }
        }
        project.Log(Level.Info, "Service " + serviceName + " doesn't exist");
       }
      ]]></code>
     </script>
    </target>
    

    Usage:

    <property name="service.name" value="Selection.Service" />
    <call target="service_exists" />
    
    <servicecontroller action="Stop" service="${service.name}" machine="${host}" timeout="60000" if="${service.exists}"/>
    
    <!-- Do something -->
    
    <servicecontroller action="Start" service="${service.name}" machine="${host}" timeout="60000" if="${bool::parse(service.exists) and bool::parse(service.running) == true}"/>
    

    Hope I did not miss anything - our build admin keeps everything in one msbuild file which is now over 3600 lines :|

    From Dandikas

Apache POI HWPF - Output a table to Microsoft Word

I've been Googling for quite a while and haven't found a definitive answer. Is it possible to output a table using Apache POI? It looks like it hasn't been implemented, since the main developer stopped working on it like 5 years ago.

Is there an open source alternative to POI that can do this?

  • I think you're right in that Apache POI is dead in the water. Clearly it wasn't glamorous enough.

    The only alternative that I'm aware of is iText, which can generate RTF documents, which MS Word (and every other similar application) can read. It includes full table support.

    And, of course, iText can generate PDF also.

    From skaffman
  • If docx and Java are both OK for you, try docx4j.

    From plutext

Binding custom functions to DOM events in prototype?

jQuery has a great language construct that looks like this:

$(document).ready(function() {
    $("a").click(function() {
        alert("Hello world!");
    });
});

As you might guess, this binds a custom function to the click event of all <a> tags once the document has loaded.

The question is, how can I achieve this same kind of behavior in Prototype?

  • Event.observe(window, 'load', function() { 
         Event.observe(element, 'click', function() { 
             alert("Hello World!");
         });
    });
    

    Of course you need to "select" the elements first in Prototype.

  • This article gives a pretty good overview of Prototype's event library. I think, compared to jQuery, this is a stone age api. :)

    http://alternateidea.com/blog/articles/2006/2/8/working-with-events-in-prototype

    savetheclocktower : That's because the linked article is two years old. The API has evolved quite a bit since then. ;-)
  • @David

    Can you elaborate on "selecting the elements first"?

    Can I do this?

    Event.observe($$('a'), 'click', function(){
      alert('Hello World!');
    });
    
    From Mark Biek
  • @Erlend

    I, so far, prefer a lot of things about jQuery as well. But I have a large Prototype code-base to work with. When in Rome...

    From Mark Biek
  • Prototype 1.6 provides the "dom:loaded" event on document:

    document.observe("dom:loaded", function() {
        $$('a').each(function(elem) {
            elem.observe("click", function() { alert("Hello World"); });
        });
    });
    

    I also use the each iterator on the array returned by $$().

    Erlend Halvorsen : Nice :) Seems Prototype has learned some new tricks since I last used it!
    From erlando
  • $(document).observe('dom:loaded', function() {
        $$('a').invoke('observe', 'click', function() {
            alert('Hello world!');
        });
    });
    
    seengee : this would be my solution also
  • @Mark Biek

    Event.on(document, 'click', 'a.greeter_class[rel]', function(event, elt) {
      alert("Hello " + elt.readAttribute('rel')); event.stop();
    });
    
    seengee : FYI this is Prototype 1.7 syntax which is still in beta

Writing ID3v2 tag parsing code, need good examples to test

I am writing software to parse ID3v2 tags in Java. I need to find some files with good examples of the tag with lots of different frames. Ideally the tags will contain an embedded picture because that is what is kicking my butt right now.

Does anyone know where I can find some good free (legal) ID3v2 tagged files (ID3v2.2 and ID3v2.3)?
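
For reference, the fixed 10-byte header at the start of the tag is the straightforward part - it's the frames (and especially the embedded APIC picture) that I need real sample files to exercise. A rough sketch of the header decoding, shown in Python just for brevity (the same bit-twiddling applies in Java):

def read_id3v2_header(path):
    """Decode the 10-byte ID3v2 header: 'ID3', version, flags, syncsafe size."""
    with open(path, "rb") as f:
        header = f.read(10)
    if header[:3] != b"ID3":
        return None  # no ID3v2 tag at the start of the file
    major, revision, flags = header[3], header[4], header[5]
    # the tag size is a 28-bit "syncsafe" integer: 7 bits per byte, high bit always 0
    size = 0
    for b in header[6:10]:
        size = (size << 7) | (b & 0x7F)
    return {"version": (major, revision), "flags": flags, "size": size}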

  • This may be obvious and not what you are seeking, but what about ripping some of your legally obtained CDs and editing them using iTunes? iTunes would also allow you to add an embedded picture. There are of course many open source programs that will also do this.

    nickf : or Winamp, if you don't feel like installing all of Apple's cruft.
    Hugh Allen : Even Winamp sneakily installs a kernel-mode driver, last time I checked :-/
    From Kevin Lamb
  • You can create the example files with a tagger by yourself. I'm the author of the Windows freeware tagger Mp3tag which is able to write ID3v2.3 (UTF-16 and ISO-8859-1) and ID3v2.4 tags in UTF-8 (both along with APIC frames). You can find a list of supported frames here.

    To create ID3v2.2 tags, I think the only program out there is iTunes, which interprets the ID3 spec in its very own way and writes numerous iTunes specific frames that are not in the spec.

    From fhe

Find running clickonce deployed single instance app?

Hi,

I'm currently facing a problem with my single instance ClickOnce-deployed app, and maybe someone has a good idea about it.

My app is called by a web application, which passes parameters to the exe. Now the problem is that if the application is already running, it takes quite a long time until the new instance is started, the parameters are handed over to the running instance, and the new instance is closed (what with opening the URL, checking for updates, ...).

So is there a way to detect that there is a running instance, without introducing a new small app, which does this detection?

The goal is to decrease the time that the second, third, ... call needs to get the parameters to the running instance.

tia Martin

  • When you set up the deployment settings, you can tell VS to only let the application update every so often (once a day, week, etc). You can also tell it to launch the application and update in the background. Either of these would solve your problem on its own, I think.

    The settings are in the Projects settings, on the Publish tab. Click the "Update" button in the "Install mode and settings" section and set appropriate settings.

  • I think I didn't make clear what I'm trying to achieve.

    What I'm trying to do is, if there is an instance running, access it directly without starting the ClickOnce URL. I'm searching for a solution where I don't have to write a little program (which would have to be deployed as well, ...) that checks if the app is running, hands over the params if it is, and starts the ClickOnce URL if it isn't.

    The background update is not really an option, because the "connecting to app" screen is still there and consuming time, and it's a must that every user is running the most recent version of the app at all times.

    Matthew Scharley : Then there's no solution. It must bring up the updating window before running the application. You don't have to defer checking for updates if you don't want to; you can still have it check in the background every time the program executes, and that way it skips straight to opening the program.
    Matthew Scharley : Basically, you're asking for an application to stay up to date, but you don't want it to update itself. Sadly, this still isn't possible, much as I wish it was with our Australian internet connections :(
  • This seems an interesting use of Click-Once technology. I was under the impression that Click-Once is ideal for distributing a client application to multiple end-user machines within an enterprise.

    In the situation described here, this is a background application used by a web-server application - which I would expect to be only installed on a few servers in the enterprise.

    Questions I have are:

    • How would your web application pass the parameters to the running instance if the web application could detect it? (eg .NET remoting?)
    • What's your reason for distributing this background application via Click-Once (as opposed to a windows installer)?

    Knowing this might help to resolve your issue.

  • No, it's not a background app. The web app and the WinForms app work with a similar subset of the database. I'm trying not to go into details because they're not important for the question, but to make it clearer: with the web app the users create the meta data for our business case, and with the WinForms app the users do their concrete work.

    So with this link, it is possible to create a new set of meta data, and cross-check the result in the "working-app".

    So there are 2 concrete scenarios:

    1. The WinForms app is not running on the client: when the user clicks on the ClickOnce start menu entry, or the link in the web app, everything should be done the way it is now (with update check, ...). So this scenario works for me.

    2. The WinForms app is running on the client: the running instance should display the new set of meta data as quickly as possible, without any ClickOnce update check or whatever. What I'm trying to bypass in this scenario is the "ClickOnce starting app" dialog popping up, the new app instance starting, passing the parameters to the running instance and closing itself. So I'm searching for a solution that achieves this without creating a new small exe, known to the web app, which does the work.

    http://johnmelville.spaces.live.com/blog/cns!79D76793F7B6D5AD!122.entry may help.

    From abmv

How do you deal with large dependencies in Boost?

Boost is a very large library with many inter-dependencies -- which also takes a long time to compile (which for me slows down our CruiseControl response time).

The only parts of boost I use are boost::regex and boost::format.

Is there an easy way to extract only the parts of boost necessary for a particular boost sub-library to make compilations faster?

EDIT: To answer the question about why we're re-building boost...

  1. Parsing the boost header files still takes a long time. I suspect if we could extract only what we need, parsing would happen faster too.
  2. Our CruiseControl setup builds everything from scratch. This also makes it easier if we update the version of boost we're using. But I will investigate to see if we can change our build process to see if our build machine can build boost when changes occur and commit those changes to SVN. (My company has a policy that everything that goes out the door must be built on the "build machine".)
  • Unless you are patching the Boost libraries themselves, there is no reason to recompile them every time you do a build.

  • First, you can use the bcp tool (found in the tools subfolder) to extract just the headers and files you are using. This won't help with compile times, though. Second, you don't have to rebuild Boost every time. Just pre-build the lib files once (and again at every version change), and copy the "stage" folder at build time.
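
    For example, if I remember the bcp invocation correctly, copying just regex and format (plus everything they depend on) into a separate tree is something like this - the paths are placeholders:

    bcp --boost=/path/to/boost regex format /path/to/boost-subset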

    From vividos
  • Precompiled headers are the word of the day! Include the boost headers you need in your precompiled header - tada!

  • We're using Boost, but we only include object files for those types that we actually use. I.e., we build our own static library with a bunch of home-grown utilities and include those parts of Boost that we find useful. Our CMakeLists.txt looks something like this (you should be able to load this in CMake, if you adjust SOURCES accordingly.)

    project( MyBoost )
    
    set(SOURCES 
      boost/regex/src/c_regex_traits.cpp
      boost/regex/src/cpp_regex_traits.cpp
      boost/regex/src/cregex.cpp
      boost/regex/src/fileiter.cpp
      boost/regex/src/icu.cpp
      boost/regex/src/instances.cpp
      boost/regex/src/posix_api.cpp
      boost/regex/src/regex.cpp
      boost/regex/src/regex_debug.cpp
      boost/regex/src/regex_raw_buffer.cpp
      boost/regex/src/regex_traits_defaults.cpp
      boost/regex/src/static_mutex.cpp
      boost/regex/src/usinstances.cpp
      boost/regex/src/w32_regex_traits.cpp
      boost/regex/src/wc_regex_traits.cpp
      boost/regex/src/wide_posix_api.cpp
      boost/regex/src/winstances.cpp
    )
    
    add_library( MyBoost STATIC ${SOURCES})
    
    From JesperE

SMS alerting to respond to error situations faster

What is the easiest way to set up an SMS alerting system so that I will receive notification if my server doesn't respond or a GET query doesn't return correct content?

  • You can get a service like http://www.serviceuptime.com/ and then have it send an email to your-number@a-domain-your-provider-gives. They usually have the exact domains for the providers on their respective websites, but you could just try @t-mobile.com if your provider is T-Mobile, for example.

    If you want to write your own tool, it should be pretty straightforward - send a GET request and, if you don't get the expected response, send an email. You should run it from 2 different locations on 2 different ISPs though, because if there are routing problems and the request doesn't go through you'll get SMSed.
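
    A minimal sketch of such a check in Python (the URL, addresses and the local mail relay are placeholders for your own setup):

    import smtplib
    import urllib.request
    from email.message import EmailMessage

    URL = "http://example.com/health"   # hypothetical page to check
    EXPECTED = "OK"                     # content we expect in the response

    def check_and_alert():
        try:
            body = urllib.request.urlopen(URL, timeout=10).read().decode("utf-8", "replace")
            healthy = EXPECTED in body
        except Exception:
            healthy = False              # down, timed out or unreachable
        if not healthy:
            msg = EmailMessage()
            msg["From"] = "monitor@example.com"
            msg["To"] = "5551234567@your-carriers-sms-gateway"  # carrier's email-to-SMS domain
            msg["Subject"] = "ALERT: site check failed"
            msg.set_content("GET " + URL + " did not return the expected content")
            smtplib.SMTP("localhost").send_message(msg)  # assumes a local mail relay

    check_and_alert()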

    From Svet
  • What you want is an SMS gateway. There are surely some service providers local to you. Unfortunately, they are a bit hard to find. Try asking Google...

  • Often I've found that what you need is an SMS modem attached directly to your monitoring server. What if the problem is the network connection?

    From svrist
  • You can completely outsource the monitoring and alerting, for example by subscribing to Pingdom. The most basic Pingdom plan will monitor up to 5 services (including using HTTP GETs with optional content pattern matching) and send an SMS after a configurable amount of downtime.

    If you want to roll your own solution, the first thing to get right is the monitoring software. Many, many third party solutions exist for this, from the free to the outrageously expensive. Of course, rolling your own "try a GET and do something if it doesn't work" script is always an option, but as with all software, feature creep may mean that you're just re-implementing existing solutions soon...

    For the SMS notification, using an e-mail-to-SMS gateway might work for you, if you're in a territory well-served by such services. Most US providers, for example, provide free gateways, whereas in Europe they tend to be run by third parties and pay-only. As already suggested, Google is your friend here. Also, your monitoring tool may support sending notifications directly using a cellphone/GSM modem attached to the monitoring server.

    Bemmu : Accepted because Pingdom seems to be exactly what I was looking for.
    From mdb

python dictionary update method

I have a list of strings called tag.

I am trying to initialize a dictionary with the keys being the tag strings and the values being the array indices.

for i, ithTag in enumerate(tag):
    tagDict.update(ithTag=i)

The above gives me {'ithTag': 608} (608 being the index of the last tag).

My problem is that while i is being interpreted as a variable, Python is treating "ithTag" as a string instead of a variable.

I'm confused; it is kind of hard to Google these kinds of specific questions. I hope I worded the title of this question correctly.

Thanks!

  • You actually want to do this:

    for i, ithTag in enumerate(tag):
        tagDict[ithTag] = i
    

    The .update() method is used for updating a dictionary using another dictionary, not for changing a single key/value pair.

    From Jerub
  • I think this is what you want to do:

    d = {}
    for i, t in enumerate(tag):
       d[t] = i
    
  • Try

    tagDict[ithTag] = i
    
  • Thanks!

    for i, ithTag in enumerate(tag): tagDict[ithTag] = i

    From freshWoWer
  • If you want to be clever:

    tagDict.update(enumerate(tag))
    

    Thanks to Brian for the update. This is apparently ~5% faster than the iterative version.

    Brian : Actually, update() can take a sequence directly, so there's no need to construct an intermediate dict. Doing tagDict.update(enumerate(tag)) is actually slightly (~5%) quicker than the iterative version.
    From Claudiu
  • It's a one-liner:

    tagDict = dict((t, i) for i, t in enumerate(tag))
    

C#/.NET, what to look at?

I see lots of job ads for C#/.NET programmers, so I thought it could be a good idea to have a look at it.

After looking at a few tutorials I found nothing really new to me. Just a language with a syntax somewhere between Java and C++ (arguably nicer than both though).

So, what features in particular should I look at? What are some special features? What's the reason that C#/.NET is so large? What are some killer features or perhaps some really evil language gotchas?

Links and code examples are very welcome.

I am using the Mono-implementation on Linux.

  • The .Net Framework library is more important than the language.

    Jon Skeet : It's the combination which is important. LINQ would be significantly less useful without extension methods, for example.
  • In C# 3.0 Linq (Language Integrated Query) is worth looking at.

    From Ash
  • You can find some of the not so obvious features here

    http://stackoverflow.com/questions/9033/hidden-features-of-c

    And yes, the framework is the largest selling point.

  • Exception handling, garbage collection, reflection, a unified type system, machine architecture independence and performance are the main advantages of the .NET CLR. The Base Class Libraries are quite comprehensive and comprehensible. Both C# and VB.NET are first class languages for building applications on this platform. Consider learning both.

    From Hafthor
  • Compared with Java:

    • The "using" statement (try/finally is rarely explicit in C#) (C# 1)
    • Delegates as a first class concept (C# 1)
    • Properties and events as first class concepts (C# 1)
    • User-defined value types (C# 1)
    • Operator overloading (use with care!) (C# 1)
    • Iterator blocks (C# 2)
    • Generics without type erasure (C# 2)
    • Anonymous methods (C# 2)
    • Partial types (good for code generation) (C# 2)
    • Object and collection initializers (C# 3)
    • Lambda expressions (C# 3)
    • Extension methods (C# 3)
    • Expression trees (C# 3)
    • Query expressions (aka query comprehensions) (C# 3)
    • Anonymous types (mostly used in query expressions) (C# 3)

    They're the things I miss from C# when I write in Java, anyway. (That's not an exhaustive list of differences, of course.) Which ones are most important to you is subjective, of course. From a simple "getting things done" point of view the using statement is probably the single biggest pragmatic gain, even though it only builds a try/finally block for you.

    EDIT: For quick examples of the C# 2 and 3 features, you might want to look at my Bluffer's Guide to C# 2 and the equivalent for C# 3.

    kigurai : Thanks for the solid answer!
    From Jon Skeet
  • Killer feature: super fast Windows programming with Visual Studio.

    From supermedo

Parsing and generating Microsoft Office 2007 files (.docx, .xlsx, .pptx)

Hello,

I have a web project where I must import text and images from a user-supplied document, and one of the possible formats is Microsoft Office 2007. There's also a need to generate documents in this format.

The server runs CentOS 5.2 and has PHP/Perl/Python installed. I can execute local binaries and shell scripts if I must. We use Apache 2.2 but will be switching over to Nginx once it goes live.

What are my options? Anyone had experience with this?

  • You can probably check the code for Sphider. They index docs and PDFs, so I'm sure they can read them. It might also lead you in the right direction for other Office formats.

  • The Office 2007 file formats are open and well documented. Roughly speaking, all of the new file formats ending in "x" are zip compressed XML documents. For example:

    To open a Word 2007 XML file:

    1. Create a temporary folder in which to store the file and its parts.
    2. Save a Word 2007 document, containing text, pictures, and other elements, as a .docx file.
    3. Add a .zip extension to the end of the file name.
    4. Double-click the file. It will open in the ZIP application. You can see the parts that comprise the file.
    5. Extract the parts to the folder that you created previously.

    The other file formats are roughly similar. I don't know of any open source libraries for interacting with them as yet - but depending on your exact requirements, it doesn't look too difficult to read and write simple documents. Certainly it should be a lot easier than with the older formats.
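
    For instance, pulling the text out of a .docx from a script really is just unzip-plus-XML-parse. A minimal sketch in Python (word/document.xml and the w: namespace are the standard OOXML names; sample.docx is a placeholder):

    import zipfile
    import xml.etree.ElementTree as ET

    W_NS = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

    def docx_text(path):
        """Return the plain text of a .docx by reading word/document.xml from the zip."""
        with zipfile.ZipFile(path) as z:
            xml_bytes = z.read("word/document.xml")   # the main document part
        root = ET.fromstring(xml_bytes)
        # every <w:t> element holds a run of text
        return "".join(t.text or "" for t in root.iter(W_NS + "t"))

    print(docx_text("sample.docx"))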

    If you need to read the older formats, OpenOffice has an API and can read and write Office 2003 and older documents with more or less success.

    Brian : This is wonderful. Thanks for the insight.
  • I have successfully used the OpenXML Format SDK in a project to modify an Excel spreadsheet via code. This would require .NET and I'm not sure about how well it would work under Mono.

    From Hafthor
  • The python docx module can generate formatted Microsoft office docx files from pure Python. Out of the box, it does headers, paragraphs, tables, and bullets, but the makeelement() module can be extended to do arbitrary elements like images.

    from docx import *
    document = newdocument()
    
    # This location is where most document content lives 
    docbody = document.xpath('/w:document/w:body',namespaces=wordnamespaces)[0]
    
    # Append two headings
    docbody.append(heading('Heading',1)  )   
    docbody.append(heading('Subheading',2))
    docbody.append(paragraph('Some text'))
    
    From nailer

Tracing versus Logging and how does log4net fit in?

I am wondering about what the difference between logging and tracing is.

Is the difference basically that tracing is a more detailed log, giving developers a tool to debug applications at runtime?

I have been experimenting with log4net and doing logging. Now I am wondering if I should be doing tracing as well and if I could/should use log4net for that purpose. Should I be doing tracing with log4net and is there some trace level for log4net loggers? Should I use a different log level for debug and trace purposes or is it ok to use the same? Can you give a simple example on how I would do logging and tracing for a simple method?

Edit: Despite a few helpful answers below I am still unsure how I should be doing tracing versus logging.

I have the following method in my Business layer and I want to add logging/tracing to it. I am wondering how to do it efficiently. Is the following method acceptable in terms of logging/tracing? Should the log messages be of type Info instead of Debug? Are the Debug messages I am logging considered trace? How would you change it?


IEnumerable<Car> GetCars()
{
   try
   {
      logger.Debug("Getting cars");
      List<Car> cars = CarAccessor.GetCars().ConvertAll(DataAccessToBusinessConverter);
      logger.Debug("Got a total of " + cars.Count + " cars");
      return cars;
   } catch (Exception e) {
      logger.Error("Error when getting cars", e);
      throw new Exception("Unexpected error when getting cars", e);
   }
}

  • logging != debugging

    Sometimes keeping log files is necessary to solve issues with the client, they prove what happened on the server side.

    From
  • I'd say yes. Logging is the only way to determine what happened in the past - if a customer calls and says something didn't happen as expected, without a log all you can do is shrug and try and reproduce the error. Sometimes that is impossible (depending on the complexity of the software and the reliance on customer data).

    There is also the question of logging for auditing: a log file can be written containing information on what the user is doing, so you can use that to narrow down the possibilities when debugging a problem, or even to verify the user's claims (if you get a report that the system is broken and xyz didn't happen, you can look in the logs to find out that the operator failed to start the process, or didn't click the right option to make it work).

    Then there's logging for reporting, which is what most people think logging is for.

    If you can tailor the log output then put everything in logs and reduce or increase the amount of data that gets written. If you can change the output level dynamically then that's perfect.

    You can use any means of writing logs, subject to performance issues. I find appending to a text file is the best, most portable, easiest to view, and (very importantly) easiest to retrieve when you need it.

    From gbjbaanb
  • log4net is well suited for both. We differentiate between logging that's useful for post-release diagnostics and "tracing" for development purposes by using the DEBUG logging level. Specifically, developers log their tracing output (things that are only of interest during development) using Debug(). Our development configuration sets the Level to DEBUG:

    <root>
            <level value="DEBUG" />
            ...
    </root>
    

    Before the product is released, the level is changed to "INFO":

    <level value="INFO" />
    

    This removes all DEBUG output from the release logging but keeps INFO/WARN/ERROR.

    There are other log4net tools, like filters, hierarchical (by namespace) logging, multiple targets, etc., but we've found the above simple method quite effective.

    Xerx : So I take it that the difference between tracing and logging is just an aesthetic one - log entries at the DEBUG level are indeed tracing?
    Bob Nadler : Most of the time I think that's true. There are specialized situations, like tracking real-time device interfaces for example, where a general purpose tool like log4net might not be the best choice. BTW: We use DEBUG because it's a predefined Level. You can also define your own level(s): e.g TRACE
    From Bob Nadler
  • IMO...

    Logging should not be designed for development debugging (but it inevitably gets used that way)
    Logging should be designed for operational monitoring and trouble-shooting -- this is its raison d'ĂȘtre.

    Tracing should be designed for development debugging & performance tuning. If available in the field, it can be used for really low-level operational trouble-shooting, but that is not its main purpose.

    Given this, the most successful approaches I've seen (and designed/implemented) in the past do not combine the two together. Better to keep the two tools separate, each doing one job as well as possible.

  • Logging is the generic term for recording information- tracing is the specific form of logging used to debug.

    In .Net the System.Diagnostics.Trace and System.Diagnostics.Debug objects allow simple logging to a number of "event listeners" that you can configure in app.config. You can also use TraceSwitches to configure and filter (between errors and info levels, for instance).

    private void TestMethod(string x)
    {
        if(x.Length> 10)
        {
            Trace.Write("String was " + x.Length);
            throw new ArgumentException("String too long");
        }
    }
    

    In ASP.Net, there is a special version of Trace (System.Web.TraceContext) which writes to the bottom of the ASP page or to Trace.axd. In ASP.Net 2+, there is also a fuller logging framework called Health Monitoring.

    Log4Net is a richer and more flexible way of tracing or logging than the in-built Trace, or even ASP.Net Health Monitoring. Like Diagnostics.Trace, you configure event listeners ("appenders") in config. For simple tracing, the usage is as simple as the in-built Trace. The decision to use Log4Net comes down to whether you have more complicated requirements.

    private void TestMethod(string x)
    {
        Log.Info("String length is " + x.Length);
        if(x.Length> 10)
        {
            Log.Error("String was " + x.Length);
            throw new ArgumentException("String too long");
        }
    }
    
    ronaldwidha : It's probably worth pointing out that Log4Net has log4net.Appender.TraceAppender which outputs to Visual Studio Output window the same way as Trace class does.
    From martin
  • Also, consider what information is logged or traced. This is especially true for sensitive information.

    For example, while it may be generally OK to log an error stating

    "User 'X' attempted to access but was rejected because of a wrong password",

    it is not OK to log an error stating

    "User 'X' attempted to access but was rejected because the password 'secret' is not correct."

    It might be acceptable to write such sensitive information to a trace file (and warn the customer/user about that fact by "some means" before you ask him to enable trace for extended troubleshooting in production). However, for logging, I always have it as a policy that such sensitive information is never to be written (i.e. at levels INFO and above, in log4net speak).

    This must be enforced and checked by code reviews, of course.

  • Logging is not tracing. The two should be different libraries with different performance characteristics. In fact I have written a tracing library myself, with the useful property that it can automatically trace the exception when a method with tracing enabled is left via an exception. Besides this, it makes it possible to elegantly trigger exceptions at specific places in your code.

Query Parse error in shopping cart application

In my shopping cart app, I execute a query using:

44: @mysql_query("insert into cart(cookieId, itemId, qty) values('" . GetCartId() . "', $itemId, $qty)");

But when I view the webpage, I get the following error: Parse error: parse error, unexpected T_STRING, expecting '{' in H:\Program Files\EasyPHP 2.0b1\www\beta\cart.php on line 44

  • Can you include a bit more context, e.g. lines 40-44? The error may well be earlier on in the code (line 44 looks fine to me). Also, please use pre-formatting if possible.

    From Bobby Jack
  • You (probably) have a mismatched ' or " somewhere before line 44.

    From pmg
  • Sure.

    I have a webpage that lists the products that can be "added to the cart". In it there is a table with this in one of the tds:

    <a href="cart.php?action=add_item&id=&qty=1">Add Item</a>

    In the next webpage (cart.php), I have a function to add the item:

    function add_item($itemId, $qty)

    $result = mysql_query("select count from cart where cookieId = '" . GetCartId() . "' and itemId = $itemId");

    $row = mysql_fetch_row($result); $numRows = $row[0];

    if($numRows == 0) { @mysql_query("insert into cart(cookieId, itemId, qty) values('" . GetCartId() . "', $itemId, $qty)"); }

  • You need a { before the start of the function body

    function add_item($itemId, $qty)
    
    { /* HERE */
    
    $result = mysql_query(/* ... */);
    
    From pmg
  • Thanks, I added in the braces but now I get these errors. They look like database connectivity errors, but I was able to connect to MySQL in a previous page using the same credentials.

    Warning: mysql_query() [function.mysql-query]: Access denied for user 'ODBC'@'localhost' (using password: NO) in H:\Program Files\EasyPHP 2.0b1\www\beta\cart.php on line 33

    Warning: mysql_query() [function.mysql-query]: A link to the server could not be established in H:\Program Files\EasyPHP 2.0b1\www\beta\cart.php on line 33

    Warning: mysql_fetch_row(): supplied argument is not a valid MySQL result resource in H:\Program Files\EasyPHP 2.0b1\www\beta\cart.php on line 35

  • Try this:

    $result = mysql_query(/* ... */)
        or die('SQL Error @ ' . __FILE__ . ':' . __LINE__ . ' [' . mysql_error() . ']');
    

    Do the same to the mysql_connect() line.

    From pmg
  • I have the same problem, and that didn't solve it. Can anyone help?

  • You probably need to pass the MySQL connection into your function, or make it global, as mysql_query() is unable to find an existing connection. I always do this to ensure I have the right connection. I'm not sure what difference there may be with mysql_query() being able to find the connection from within a function.

  • I have my connection to MySQL in a function in a separate file. That file is require_once'd in every PHP file that needs to connect to the database. Sometimes I run queries directly from the page, other times I run them from yet another function. It has always worked for me, from PHP 4.something to PHP 5.2.5.

    Configuration

    <?php
    // config.inc.php
    define('CONFIG_DBSERVER', 'myserver');
    define('CONFIG_DBUSER', 'username');
    define('CONFIG_DBPASS', 'password');
    define('CONFIG_DATA', 'database');
    ?>
    

    Connection

    <?php
    // dbfx.inc.php
    function db_connect($server, $user, $pass, $db) {
      $con = mysql_connect($server, $user, $pass);
      if ($con) {
        if (!mysql_select_db($db)) return false;
      }
      return $con;
    }
    /* ... */
    

    Use

    <?php
    require_once 'config.inc.php';
    require_once 'dbfx.inc.php';
    /* ... */
    $con = db_connect(CONFIG_DBSERVER, CONFIG_DBUSER, CONFIG_DBPASS, CONFIG_DATA);
    if (!$con) die('Error: ' . mysql_error());
    /* ... */
    
    From pmg