Thursday, March 3, 2011

Best nntp to web gateway?

My company uses Usenet groups on an internal NNTP server, and I would like to add a web front end that would allow the usual browsing and searching but, in addition, provide an archive of old messages that may have expired on the server. This is mainly for searching the archives, so the ability to post is not important.

Can anyone recommend a piece of software that would act as such a gateway? Most of the stuff I found on Google appears to be either no longer maintained or missing the archive ability.

From stackoverflow
  • Gmane seems to be nice. You could also look into an NNTP -> email gateway and use any of the available email web archive software.

  • How about FUDforum? For details see http://fudforum.org/

How can I force my SharePoint AjaxSmartPart to use the smaller "Release" mode javascript?

I use ReturnOfSmartPart on WSS3 to display an ASCX in SharePoint as a web part. This works beautifully.

The only problem is that the AjaxSmartPart seems to be using the debug-mode JavaScript (the scripts are close to 1 MB!).

How can I ensure that the AjaxSmartPart only uses the "Release"-mode JavaScript?

Note:

  • I have Published my ASCX in Release mode.
  • Debug="false" in my Sharepoint web.config
From stackoverflow
  • Simple: set debug="false" in your web.config. This should already work.

    If you wish to override the behaviour explicitly, override OnInit in your ASCX as follows...

    protected override void OnInit(EventArgs e)
    {
        // Force the AJAX framework to emit the release (minified) scripts
        ScriptManager sm = ScriptManager.GetCurrent(this.Page);
        if (sm != null)
        {
            sm.ScriptMode = ScriptMode.Release;
        }

        base.OnInit(e);
    }
    

Http Web Service Load Test

I have a web service I'm trying to load test. I created a program that spawns threads to simulate multiple users connecting to the service and making a request. Each thread makes "some number" of requests before it dies. Each new request is made as soon as it is ready - there is little to no delay between getting a response and making the next request. The web service under test calls another web service on the same server.

This process seems to work fine up to 90 simulated users. Once I try 100 simulated users, however, everything hangs after about 6 requests have been processed in total. The simulated clients are stuck waiting for responses, and the web service is not receiving the requests. It takes a server restart to recover.

What's going on here? Is there a limit to the number of connections I can make to the server? Should I attempt to "ramp up" the load (right now I'm just firing them as fast as I can)?

The server I'm using is the Java CAPS 5.1.3 application server, and the library I'm using to make the requests is HttpUnit 1.6.2.


Follow-up question: What's the benefit of the ramp-up time? Why can't I just push all the load to the server at the start of the test?

From stackoverflow
  • There can be multiple causes for this happening; you can try the approaches below to figure out the cause. Hope they help:

    1. Use Ramp-up

    Use a decent ramp-up, say at least one user every 2 seconds. If using a ramp-up solves your problem, then it's definitely a connection-count issue (see the sketch at the end of this answer).

    2. Code Review

    Thoroughly check the load-injection code you have written for unexited loops/threads.

    You can also use a profiling utility, e.g. JENSOR, to find out which method is deadlocking and making the server unresponsive.


    Also, check these parameters on your web server, tweak them, and re-test:

    • MaxThreads
    • MaxProcesses
    • MaxSessionCount


    Answer for Follow-up Question

    The ramp-up simulates a real-life scenario and at the same time gives the web server some breathing space. When load-testing, the traffic pattern should be as close to real life as possible to get accurate and scalable predictions.

    The parameters which play the most important part in doing this are:

    • Ramp-up
    • Think Time
    • Pacing between iterations
    • Transaction Mix
    • No. of concurrent users
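
    As a rough illustration of the ramp-up idea (plain Java rather than HttpUnit-specific code; the class and method names below are invented for the sketch), the load injector starts one simulated user at a time instead of firing them all at once:

    // Minimal ramp-up sketch: start one simulated user every rampUpMillis
    // instead of starting all of them at the same instant.
    public class RampUpLoadTest {

        // Placeholder for the real HttpUnit request loop of one simulated user.
        static void simulateUser() { /* make "some number" of requests here */ }

        public static void main(String[] args) throws InterruptedException {
            int users = 100;
            long rampUpMillis = 2000; // one new user every 2 seconds

            for (int i = 0; i < users; i++) {
                new Thread(new Runnable() {
                    public void run() { simulateUser(); }
                }).start();
                Thread.sleep(rampUpMillis); // stagger the next user's start
            }
        }
    }
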
  • I think you should try using JMeter for load testing. It has all the ramp-up features built in. This PPT presentation compares the two tools, so you can see which fits you better.

modifying JavaScript rollover script

Hi all,

I'm trying to implement this JavaScript code on Blogspot (which parses the page as XML, so some code works better than others):

 </head><body>
<div class="navbar section" id="navbar"><div class="widget Navbar" id="Navbar1"><script type="text/javascript">
    function setAttributeOnload(object, attribute, val) {
      if(window.addEventListener) {
        window.addEventListener("load",
          function(){ object[attribute] = val; }, false);
      } else {
        window.attachEvent('onload', function(){ object[attribute] = val; });
      }
    }
  </script>

That code produces the rollover just fine with the following snippet:

<img xsrc="/1.jpg" class="domroll /1flip.jpg" src="/1.jpg">

I'd like to modify the code so that on mouseover it replaces a different image rather than the one the cursor is currently over, but so far I've had no luck (perhaps by naming the images and passing the names as variables).

Can anyone help?

thanks

From stackoverflow
  • The snippet of JavaScript you have posted doesn't have anything to do with image rollovers, and doesn't produce the second snippet.

    The first snippet allows you to change an attribute of a DOM node on load in a cross-browser fashion. You might want to edit your question to include the appropriate snippet.
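
    For illustration only (the element id and image path here are invented), that helper would be used like this to change a different image's src once the page has loaded; the script must appear after the <img> element so getElementById can find it:

    <img id="otherImage" src="/1.jpg">
    <script type="text/javascript">
      // Hypothetical usage of the setAttributeOnload helper from the question:
      // swap a *different* image's src attribute once the page has loaded.
      setAttributeOnload(document.getElementById('otherImage'), 'src', '/1flip.jpg');
    </script>
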

Google Maps Polyline speed in Internet Explorer

Is there a way to speed up the rendering of Polylines in Google Maps when using Internet Explorer (7)? My map loads quickly in other browsers.

I've encoded the Polylines as described here - http://facstaff.unca.edu/mcmcclur/GoogleMaps/EncodePolyline/ and they are loaded from a static JavaScript file.

From stackoverflow
  • IE7 is slower than Firefox 3 (and IE6 is slower than IE7) at rendering large and complex polylines with JavaScript.

    There are some minor tricks to speed up the rendering (smooth the lines before rendering, adjust for zoom level, and such).

    One trick is to create a KML file instead, host it at a public URL, and use that in Google Maps. A more "enterprise" trick is to use GeoServer or MapServer and create overlays. That would really boost performance (they are truly server-side).

  • This is probably because IE doesn't support canvas or SVG or whatever it is that Google Maps uses to draw the lines. To get around this, the line data is sent to Google, and they turn it into an image that is then downloaded and displayed.

  • Tom, Google Maps uses VML to draw the lines in IE; Firefox uses SVG. The image transformation is for browsers not supporting SVG or VML.

What are some ideas for writing code samples for job applications? What would you write?

No, this isn't a 'plz send me teh codez' question, before you start!

I've been looking for a new job lately, and I've found that a lot of advertisements (well, the ones for the places that I wouldn't mind working at), ask for code samples in addition to a CV. I've spent some time thinking about this, as my day job is writing PHP/HTML/CSS CRUD apps and most of the code I've written is fairly mundane. I don't really have anything I can point to that is particularly impressive or interesting in and of itself. Maybe that means I'm a terrible developer or something, I don't know.

My question is - what type of thing would you write as a sample of your work? Would you bother writing something special at all, or just submit something mundane you've already written, like I mentioned? Is there anywhere I can get ideas for something interesting to do? I was thinking I might solve one of the harder Project Euler problems, but (a) that doesn't really relate to writing CRUD apps and (b) PHP kinda sucks for Project Euler.

Thanks :)

From stackoverflow
  • Choose a piece of code you've written that solved a non-trivial or interesting problem. Describe the problem you faced, and your subsequent approach to solving it in code.

    Don't send something 'mundane'.

  • Write a personal page presenting your professional profile, with a backend. Make it as polished as you can, and open up the code. Include this website in your CV, and put a link in it to download the code.

    More complex: write a system to install a temporary version of this website, so your interviewer could try your application.

  • You have to ask yourself what kind of thing you think your potential employers are looking for in the code sample. In my experience (of interviewing candidates), given a code sample I look for a good clean coding style, sensible names for functions/variables, and something that demonstrates a reasonable understanding of the language (clever use of a particular feature or something that shows more advanced knowledge). I don't actually look too closely at what the code does - we test for that separately. However, you should be prepared to talk about the code, explain why you took a particular approach, and how, if you had more time, you would like to expand or improve on what you did.

    I suggest you pick a reasonably small and fun problem and tackle that.

  • I think it is best to send a sample of code from a real application you wrote, not something you made up for the interview. The code is supposed to give the interviewers some insight into how you program, and it will work as a conversation starter - they will possibly fire questions about technical details, your opinion on certain matters, design decisions, etc. If you write something specifically for the interview, that will probably not work well. At least it will send the message that the code you write normally isn't worth sending over - which is probably quite the opposite of what you want.

    So pick a piece of code that is well written and clear. If that's (part of) a simple CRUD-application, that's OK, because that's what you've been building. There don't need to be any spectacular algorithms in there or anything - that's not what they want to know. If they do, they'll ask for that (in the interview).

  • Whatever code you send, make sure it has:

    1. Written description of why, how, when, what-for etc.
    2. Unit tests.
  • See if you can't show off the structure of your PHP applications. Using a relatively unstructured language like PHP to put together an MVC design, etc might be pretty impressive to some employers.

  • I got my present job partly because I had (and still have) a programming weblog. The CTO noticed that one of the first postings was about SNOBOL (one of his favourite languages) and told the CEO that they had to employ me.

  • This is a great question! I started to answer it here and found it so interesting that I decided to write a whole blog post on the topic, which you can find here: "The Programmer’s Guide to Getting Hired: The Code Sample".

    Here is the quick and dirty version:

    1. Code samples are more frequently used to filter people out than to assess any programming skill. It is just too hard to tell much more than basic competency in such a small amount of work. As such, I suggest keeping it simple and short so you don't give yourself enough rope to hang yourself with. People are opinionated about code and generally hate anything that was written by someone else, so don't give them a lot of nitpick fodder.
    2. Whatever you submit, be prepared to defend it, optimize it, and talk intelligently about any technology you used.
    3. I suggest (for reasons I explain in the article) that you write a fresh code sample and don't submit something you wrote for work.
    4. Readability and accuracy are the most important factors. Even more than what the code actually does.
    5. If you want to impress, don't use clever tricks. It is better to show off which technologies you know how to use correctly and your ability to apply good technique (Design patterns, OOP, etc) than how well you can game a compiler.

Convert string to code in Scheme

How do I convert a string into the corresponding code in PLT Scheme (which does not contain the string->input-port method)? For example, I want to convert this string:

"(1 (0) 1 (0) 0)"

into this list:

'(1 (0) 1 (0) 0)

Is it possible to do this without opening a file?

From stackoverflow
  • According to this similar question on comp.lang.scheme, you can save the string to a file and then read from it.

    That might go something like this example code:

    (let ((my-port (open-output-file "Foo")))
      (display "(1 (0) 1 (0) 0)" my-port)
      (close-output-port my-port))
    
    (let* ((my-port (open-input-file "Foo"))
           (answer (read my-port)))
      (close-input-port my-port)
      answer)
    
  • Scheme has the procedure read for reading s-expressions from an input port, and you can convert a string to an input port with string->input-port. So, you can read a Scheme object from a string with

    (read (string->input-port "(1 (0) 1 (0) 0)"))
    

    I don't have Scheme installed, so I only read it from reference and didn't actually test it.

  • From PLT Scheme manual:

    (open-input-string string [name-v]) creates an input port that reads bytes from the UTF-8 encoding (see section 1.2.3) of string. The optional name-v argument is used as the name for the returned port; the default is 'string.

    Svante : This looks like abstraction inversion in Scheme. Common Lisp has a `read-from-string` function.
    Anton Nazarov : Yes, it seems read reads only from input ports, and opening an input port for reading from a file is in R5RS, but making an input port for reading from a string is not. So that is the source of the question. MIT Scheme has string->input-port; PLT has open-input-string.
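
    Based on that manual entry, the conversion should reduce to a one-liner (a sketch, not tested here):

    ;; PLT Scheme: wrap the string in an input port, then read one datum from it
    (read (open-input-string "(1 (0) 1 (0) 0)"))
    ;; => (1 (0) 1 (0) 0)
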
  • Many Schemes have with-input-from-string str thunk, which executes thunk in a context where str is the standard input port. For example, in Gambit Scheme:

    (with-input-from-string "(foo bar)" (lambda () (read)))

    evaluates to:

    (foo bar)

    The lambda is necessary because a thunk should be a procedure taking no arguments.

    JBF : And since `read` is a procedure taking no arguments, you could shorten the above example to `(with-input-from-string "(foo bar)" read)`.

Recursive Make - friend or foe?

I'm using (GNU) Make in my project. I'm currently putting one makefile per directory and specifying the subdirectories using SUBDIRS. It's been suggested to me that this is not the ideal way of using make, and that using one top-level makefile (or several, split up using include) would be better. I've tried migrating to/using this layout in the past, but it appears to me that it's unnecessarily complicated.

Which are the benefits/drawbacks of using recursive makefiles?

From stackoverflow
  • The benefit that I've gotten from this in the past is that it's easier to build files in a single subdirectory. You can do this with dependencies, but it's a bit more work to keep all of the targets straight. Basically, this makes it easier to make changes and test one library without having to deal with the full complexity of the larger project.

    Johan Dahlin : You're right. Easier reuse of common targets is another advantage.
  • The issue with recursive make is the time overhead of evaluating all the different makefiles vs. evaluating one large makefile. Part of this is just spawning processes, but also (IIRC) you tend to be forced into assuming that the other makefiles did something and rebuilding when you don't really need to.

    My take on it is to have a single makefile per "unit", which more or less amounts to having a makefile for each chunk of code that you expect could be used on its own (e.g. as an independent library).

    OTOH my current project breaks this all over the place as I'm generating make files during the build. :b

    JesperE : No, time is not the biggest problem. The biggest problem is partitioning the dependency tree into several dependency trees, which prevents you from properly expressing dependencies across sub-makes.
    BCS : IMNSHO the only reason you even care about dependencies at all *is* time. If you don't care about time, just do a from scratch, blank slate rebuild every time. You can get "correct" dependencies expressed by coding so if anything changes, it's assumed that everything changed, but it takes more time.
  • An article entitled "Recursive Make Considered Harmful" can be found here: http://miller.emu.id.au/pmiller/books/rmch/. (Or at the Aegis project at SourceForge.)

    It explores the problems with recursive makefiles, and recommends a single-makefile approach.

    Johan Dahlin : That's a good reference, thanks for pointing it out.
    Dana the Sane : Unfortunately, the link to the actual paper is giving a 500.
    caspin : The link to paper seem to be ok now.
  • To throw in a third option, you could use GNU Autotools. They are mostly used for other reasons, but may also be helpful in organizing a multi-directory build.

    http://www.lrde.epita.fr/~adl/autotools.html

    It has to be noted, though, that the result is a recursive version.

    Johan Dahlin : I'm actually using autotools, but you can use both recursive and non-recursive mode.
  • The first thing you should keep in mind (just to eliminate any misunderstanding) is that we're not talking about a single vs. multiple makefiles. Splitting your makefile in one per subdirectory is probably a good idea in any case.

    Recursive makefiles are bad primarily because you partition your dependency tree into several trees. This prevents dependencies between make instances from being expressed correctly. This also causes (parts of) the dependency tree to be recalculated multiple times, which is a performance issue in the end (although usually not a big one.)

    There are a couple of tricks you need to use in order to properly use the single-make approach, especially when you have a large code base:

    First, use GNU make (you already do, I see). GNU make has a number of features which simplifies things, and you won't have to worry about compatibilities.

    Second, use target-specific variable values. This will allow you to have, for example, different values of CFLAGS for different targets, instead of forcing you to have a single CFLAGS in your entire make:

     main: CFLAGS=-O2
     lib: CFLAGS=-O2 -g
    

    Third, make sure you use VPATH/vpath to the full extent supported by GNU make.

    You also want to make sure that you do not have multiple source files with the same name. One limitation of VPATH is that it does not allow you to have target-specific VPATH definitions, so the names of your source files will have to co-exist in a single "VPATH namespace".

    Johan Dahlin : Great answer, thanks! I'm actually using VPATH even for a project using recursive makefiles; it makes life a lot easier, especially for srcdir != builddir builds.
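
    As a rough sketch of the single-make layout described above (the directory and file names are invented), the top-level makefile includes one fragment per subdirectory instead of recursing into it:

    # Top-level Makefile: pull in one fragment per subdirectory instead of
    # recursing into it; a single make instance then sees the whole tree.
    include lib/module.mk
    include app/module.mk

    all: $(TARGETS)

    # lib/module.mk (included above) -- paths are written relative to the top:
    #
    #   TARGETS += lib/libfoo.a
    #   lib/libfoo.a: lib/foo.o lib/bar.o
    #           $(AR) rcs $@ $^
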
  • Run, don't walk, to cmake.org and get Cmake, one of the best build tools available.

    You will still be using GNU make, but in this case CMake will generate the makefiles for you.

    I can't guarantee it 100%, but I have yet to come across a case where it has not handled dependencies between subdirectories correctly (i.e. the problem that plagues recursive make). At the very least it is a lot easier to maintain CMake files than makefiles. Highly recommended.

    Do not use GNU autotools - that way madness lies!

    Johan Dahlin : I already know GNU Autotools too well and I'm getting too old to learn a new system. Maybe in another 5 years :-)
    Kristopher Johnson : CMake is worth the effort needed to learn. It is much easier than Autotools, and its cross-platform support is much better.
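
    For a feel of what that looks like (project and file names invented), a minimal CMake setup is one short CMakeLists.txt per directory, and CMake works out the cross-directory dependencies itself:

    # ./CMakeLists.txt
    cmake_minimum_required(VERSION 2.6)
    project(myproject C)
    add_subdirectory(lib)    # defines the 'foo' library target
    add_subdirectory(app)    # defines the 'app' executable

    # lib/CMakeLists.txt would contain:
    #   add_library(foo foo.c bar.c)
    #
    # app/CMakeLists.txt would contain:
    #   add_executable(app main.c)
    #   target_link_libraries(app foo)   # cross-directory dependency handled by CMake
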

Which is better: applying two conditions in nested ifs, or using a single if with And?

Nested if or a single if with the And operator - which is the better approach?
Single If with And

if (txtPackage.Text != string.Empty && txtPackage.Text == "abc")
{
    //
}

Nested If

if (txtPackage.Text != string.Empty)
{
    if (txtPackage.Text == "abc")
    {
        //
    }
}
From stackoverflow
  • Are you going to do something different in the 'nested if' example if, in fact, txtPackage.Text isn't empty but contains something other than "abc"?

    If you aren't, I'd ask why you are checking for string.Empty at all?

    You could just write:

    if (txtPackage.Text == "abc")
    {
    
    //
    
    }
    

    and be done with it.

    Totally depends upon what you want to do in the end.

    Timothy Khouri : typo: "I'd asK why"... but other than that... very well said :)
  • You really need to define what you mean by "better".

    My style is to use one if and an AND if, like in your example, I'm testing the same thing for two different values.

    If the two tests are conceptually different, I'll probably nest them

    if (!user_option.work_offline) {
        if (no_current_connection) {
            start_connection()
        }
    }
    
  • +1 to itsmatt

    On the original question, I personally avoid nested ifs wherever possible, otherwise I'd end up with lots of arrow code.

    There are, however, exceptions to this mini-rule. If there is going to be different behaviour for each of the conditional outcomes, then nested ifs may be an answer. You need to carefully consider the impact of nesting so you don't write difficult to read (and therefore maintain) code.

  • I wasn't going to chime in, but seeing that some answers here seem to be about "I like my code to look like this"... I feel that I should say something :)

    "Better" means the code will execute faster, or it's more readable / extendable. You would want to nest your if's in the case that you would possibly have multiple checks that all have a common requirement.

    Example:

    if (myThingy != null)
    {
        if (myThingy.Text == "Hello") ...
    
        if (myThingy.SomethingElse == 123) ...
    }
    

    EDIT: It also needs to be said that nesting your ifs requires more CPU cycles (and is therefore "slower") than a single if. On top of that, the order of your conditions can greatly affect performance.

    Example again:

    if (somethingQuick() && somethingThatTakesASecondToCalculate()) ...
    

    is a LOT faster (of course) than

    if (somethingThatTakesASecondToCalculate() && somethingQuick()) ...
    

    Because if the first part of the IF fails, the second part won't even be executed, thus saving time.

    : In .NET, it doesn't matter if two IF's are nested or not, they are converted to the exact same MSIL code. Therefore, nesting won't require more CPU cycles. But of course, this is only when the end result is the same.
    Garry Shutler : In any case: HOLY PREMATURE OPTIMISATION BATMAN! I'd rather be concerned about my code being readable to begin with rather than stressing over individual CPU cycles.
    Timothy Khouri : I don't worry about CPU cycles as such... I was simply pointing out that "how my code looks" shouldn't be the first consideration. If one of my senior developers messed up the second example though... there would have to be some e'splainin to do :)
    configurator : "How your code looks", or more exactly, "readability" is most of the time the MOST important thing in code.
  • I prefer using conditional AND/OR operators when needed, instead of nested ifs. Looks less messy and makes for less lines of code.

    if (thisIsTrue) {
        if (thisIsTrueToo) doStuff();
    }
    

    is essentially the same as:

    if (thisIsTrue && thisIsTrueToo) doStuff();
    

    If thisIsTrue is false, the second condition is not evaluated. The same works with || for a conditional OR.

  • I think it depends on how you want it to flow; if you are only executing on true/true (or any other single combination), then one if statement is all you need.

  • I feel that it is better to avoid nested ifs.

    Sometimes, I even duplicate simple tests to avoid a nesting level.

    Example (python):

    # I prefer:
    if a and b:
        foo()
    elif a and not b:
        bar()
    elif not a and b:
        foobar()
    elif not a and not b:
        baz()
    
    # Instead of:
    if a:
        if b:
            foo()
        else:
            bar()
    else:
        if b:
            foobar()
        else:
            baz()

    Sometimes it is more natural to have an else-clause as the last part. In those cases, I typically assert the conditions of the else clause. Example:

    if a and b:
        foo()
    elif a and not b:
        bar()
    elif not a and b:
        foobar()
    else:
        assert not a and not b
        baz()

  • It depends on what exactly you want to achieve. It's a logical question rather than a programming query. If you have a dependent condition - i.e. if the first is TRUE, then test the second condition; if the second is TRUE do something, if FALSE do something else - then you need a nested if. But if you just need the state of both conditions to do something, then you can go with the operators.

  • In my opinion, you should use the style that makes the most sense for what you're testing for. If the two are closely coupled, you could test for both on the same line without a loss of clarity. This is particularly easy when the language permits "if x == (1 OR 2)" constructions.

    On the other hand, if the two tests are disjointed, I'd prefer to separate them to make the logic more explicit.

How do I run client script after a postback?

I'm looking for something like OnClientClick for a button, but I want it to happen after the postback (instead of before). I'm currently using the following, which works, but it adds the script block over and over again, which is not what I want. I simply want to execute an existing client script function.

ScriptManager.RegisterClientScriptBlock( 
  this, 
  typeof( MyList ), 
  "blah", 
  "DraggifyLists();", 
  true );
From stackoverflow
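  • One possible approach, as a minimal sketch (assuming the goal is simply to call the existing DraggifyLists() function once the browser has loaded the posted-back page, and that this code lives in the same control as the snippet above): register the call as a startup script instead. ASP.NET emits startup scripts near the end of the form, and a given (type, key) pair is only rendered once per response:

    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);

        // Emit a call to the existing client-side function; RegisterStartupScript
        // places it at the end of the form, so it runs after the markup is in place.
        ScriptManager.RegisterStartupScript(
            this,
            typeof(MyList),
            "DraggifyListsCall",   // same (type, key) pair is only rendered once per response
            "DraggifyLists();",
            true);
    }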

Business Model for a Good Software App

I developed an application; it is working fine and has been in a closed beta stage for the last 1.5 years, and it has been in development for 3 years. It is quite stable, full-featured and polished as well. It's software for a certain market and can compete with other applications (only 5-10 decent applications exist in the same market). Simply put, I believe in it, and the beta test results support me.

What's the best way to make money out of this? I especially appreciate any comment from who has been through this, any good or bad experience.

Some options I've been thinking over:

  • Going with an investor (a small investor, venture capital, angel funding, etc.). I've already got about 3 investor contacts and they are highly interested in the software; since the software is complete, making a demo is such an easy task that attracting new investors is really easy.

  • Going open source (although it's not a very common kind of software, there is almost no open-source alternative, so it should be well received), and building a business model over open source to make some bucks (support for companies, commercial licenses for enterprise solutions, etc.).

  • Selling it by myself: opening a web site, marketing it with my own budget, putting a decent price tag on it (cheaper than competitors) and selling it, maybe even giving it away free for personal usage.

To be honest I solely focus on the money aspect of this.

All code written by me so I own all copyrights of the source code and there are no license issues regarding the libraries.

From stackoverflow
  • This is a very tough question to answer without knowing the specifics of the software, the target market, competitors and so forth.

    Bear in mind that no matter what course you take, most of the success of your application will depend on marketing and branding. For this reason, most people will never really make it alone, they will partner up with someone who can cater to the marketing/business development aspect of the product.

    An investor can sometimes fill that spot, and that's why many opt for that course. Investors are generally more business oriented than software developers, have good connections in many industries, and since they have a vested interest in your success (i.e., their share of stock) they will have more motivation to push your product than hired marketing guys. However, once you raise funds you are no longer your own man, as you acquire responsibilities towards your investor, who will usually push as fast as possible for exit opportunities.

    On the other hand, going open-source has its own marketing benefits, since other developers who find your project worthwhile may become evangelists of your product and help you gain traction. Also open-source seems by nature more altruistic which is always good for PR.

    Selling it by yourself is the most difficult course without someone very knowledgeable in the entire process - marketing, distribution, affiliates etc. For this course I would highly advise looking for a business-oriented partner that can help you drive the success of your product.

  • It depends on a couple of things. The product, the market, the clients etc.

    An investor might be overkill since you've already built the product. You could use the money for marketing. If you're afraid competitors will catch up quickly and you want to grab a lot of market share with a big launch, this might be the way to go.

    Starting small is a good way to go if you don't want to invest a lot and don't mind spending some time building your business. This way you give the competition more chance to catch up, though, so you'll want to think about what differentiates you from your competition. If it's only price, then you might want to go for the big expensive launch, because your competitors can easily drop their prices to undercut yours. If your software does things that your competitors can't easily replicate, then the small start is a better idea.

    Open source is good for marketing and makes it easier to get more market share without spending a lot of money. You'll have to find another way to make money, like supporting your software. This might make sense if you've got a product that needs a lot of support, for example a server product where you can offer hosting like WordPress does. If you've built a client application that's easy to install and use, this won't bring in much revenue.

    Not every type of customer will react positively to open source. Lots of big corporations still don't trust free software.

  • You have a beta-proven product in a real market, competitors exist, and there are interested investors. Without knowing more, and knowing that 'sales' is a different skill from developing, I would suggest finding one of the investors who can offer assistance in setting up the marketing and sales in your industry.

  • I think you should look more at yourself than at your product. Do you consider yourself both motivated and skilled enough to be a small business owner? The skills you'll need are very different from the ones you already have as a programmer. How much do you know about finance and marketing?

    If you want to remain being a programmer: get a business partner. Let your business partner invest his (or her) time in your joint company, and let him do the business part.

  • You should do 3 right now (sell it yourself), as a preparation for doing 1 (getting investment) and avoid 2 (going opensource). The reason I say this is that you have stated that you are in it for the money - which is no bad thing.

    There are three clear advantages I see to this approach:

    1. Prove the revenue model
    You appear to have proven the application and the gap in the market you are exploiting, but you have not proven a revenue model, and that's what your prospective investor will look for. The best way to do that is to hone your proposition and go out and get it to earn some money for itself. Your investors will be looking for that, and for how it can grow and scale as a business.

    2. Develop yourself as a businessperson and not just a techy
    The other good reason for selling it yourself is that you will gain a set of skills that you can never get sitting in front of a computer and that your programming day job will never give you. This is the single strongest argument for starting your own business in my view. If you can get to the position of being a really good techy and a really good businessman, the world is your oyster.

    3. Drive a better deal from your investors
    When you do come to negotiate terms with your investors the less you need them the more they'll want you. The best possible situation to present to them is one where you need their money not to start or survive, but to grow. If you can start and survive on your own then you can drive a much harder bargain when it comes to giving up equity.

    The reason I say avoid open source is that your investors are likely to look on that as a risk and potentially a negative aspect of your approach. It is pretty hard to make a lot of money with that model, and investors want to make a lot of money. If you really want to go open source, then make it a clear part of your strategy and do it once you have a commercially viable going concern as a business. You'll also then be in a better position to know what is really valuable to you and what is not.

    Best of luck. I've been starting tech businesses for 20-odd years and the step you are standing on is the most daunting and the most exciting.

    Tall Jeff : This is great advice. Although, I don't think you'll really need investors at that point. Once the model is proven, growth money should be obtainable from a bank at better rates.
    Simon : Maybe... Banks don't just hand it out either - especially now - you need a solid business plan. What you don't get from a Bank is advice and expertise on the board. If you get good investors with experience in software they will also be great advisors with a mutually shared interest.
    tag : I agree. Investors not only bring you the money you need; most importantly, they also introduce potential customers, expertise, etc., and those things are crucial to you, yet a bank cannot give them to you. Simon, your answer is awesome.
  • The most important thing you need now is a relationship with a successful software salesperson. You can work out what combination of equity and commission to give them, but it has been proven over and over that you are more likely to succeed with a great salesman and mediocre software than with great software that isn't sold professionally. If you can provide both, you're a lot closer to the gold.

How can I make log4perl output easier to read?

When using log4perl, the debug log layout that I'm using is :

log4perl.appender.D10.layout=PatternLayout
log4perl.appender.D10.layout.ConversionPattern=%d [pid=%P] %p %F{1} (%L) %M %m%n
log4perl.appender.D10.Filter = DebugAndUp

This produces very verbose debug logs, for example:

2008/11/26 11:57:28 [pid=25485] DEBUG SomeModule.pm (331) functions::SomeModule::Test Test XXX was successful
2008/11/26 11:57:29 [pid=25485] ERROR SomeOtherUnrelatedModule.pm (99999) functions::SomeModule::AnotherTest AnotherTest YYY has failed

This works great, and provides excellent debugging data.

However, each line of the debug log contains different function names, pid length, etc. This makes each line layout differently, and makes reading debug logs much harder than it needs to be.

Is there a way in log4perl to format the line so that the debugging metadata (everything up until the actual log message) be padded at the end with spaces/tabs, and have the actual message start at the same column of text?

From stackoverflow
  • You can pad the individual fields that make up your entries. For example, [pid=%5P] will always give you at least 5 characters for the PID.

    The "Quantify Placeholders" section in the docs for Log::Log4perl::Layout gives more details.
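
    For example (the field widths here are arbitrary), a padded variant of the pattern above keeps the message column roughly aligned:

    log4perl.appender.D10.layout.ConversionPattern=%d [pid=%5P] %-5p %-25F{1} (%4L) %-45M %m%n
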

  • There are a couple of ways to go with this, although you have to figure out which one works better for your situation:

    1. Use a different appender if you are working live. Have that appender use a pattern that shows only the information you want. If you're working in a single process, for instance, your alternate appender might leave off the PID and the timestamp. You might only need the file name and line number.

    2. Use %n to put newlines in the right place. That makes it multi-line output that is slightly harder to parse later, but you can choose another sequence for the input record separator (say, a literal "[EOL]") to make it easy to read entry-by-entry.

    3. Log to a database instead of a file. For your reports, select just the columns you want to inspect.

    4. Log everything, but write a filter to go through the log file ad-hoc to display just the parts that you want to see, such as only the debugging messages, the entries between certain times, only the entries involving a file, and so on.

How do I generate a compiled lambda with method calls?

I'm generating compiled getter methods at runtime for a given member. Right now, my code just assumes that the result of the getter method is a string (which worked fine for testing). However, I'd like to make this work with a custom converter class I've written; see the "ConverterBase" reference that I've added below.

I can't figure out how to add the call to the converter class to my expression tree.

    public Func<U, string> GetGetter<U>(MemberInfo info)
    {
        Type t = null;
        if (info is PropertyInfo) 
        {
            t = ((PropertyInfo)info).PropertyType;
        }
        else if (info is FieldInfo)
        {
            t = ((FieldInfo)info).FieldType;
        }
        else
        {
            throw new Exception("Unknown member type");
        }

        //TODO, replace with ability to specify in custom attribute
        ConverterBase typeConverter = new ConverterBase();

        ParameterExpression target = Expression.Parameter(typeof(U), "target");
        MemberExpression memberAccess = Expression.MakeMemberAccess(target, info);

        //TODO here, make the expression call "typeConverter.FieldToString(fieldValue)"

        LambdaExpression getter = Expression.Lambda(memberAccess, target);

        return (Func<U, string>)getter.Compile();
    }

I'm looking for what to put in the second TODO area (I can handle the first :)).

The resulting compiled lambda should take an instance of type U as a param, call the specified member access function, then call the converter's "FieldToString" method with the result, and return the resulting string.

From stackoverflow
  • You need to wrap the object in a ConstantExpression, e.g. by using Expression.Constant. Here's an example:

    class MyConverter
    {
        public string MyToString(int x)
        {
            return x.ToString();
        }
    }
    
    static void Main()
    {
        MyConverter c = new MyConverter();
    
        ParameterExpression p = Expression.Parameter(typeof(int), "p");
        LambdaExpression intToStr = Expression.Lambda(
            Expression.Call(
                Expression.Constant(c),
                c.GetType().GetMethod("MyToString"),
                p),
            p);
    
        Func<int,string> f = (Func<int,string>) intToStr.Compile();
    
        Console.WriteLine(f(42));
        Console.ReadLine();
    }
    
    TheSoftwareJedi : Expression.Constant - FTW. Thanks. I'll try it out now and award the win if it's good. Thanks!
    TheSoftwareJedi : you missed my nested member access call, but that was easy enough to add to the tree. Thanks again
    Barry Kelly : I didn't miss it - I believed I saw your dilemma and addressed that specifically :) I work on compilers in my day job, so it was pretty clear.
    TheSoftwareJedi : You missed the cast, which leaves me split between giving you the answer or Marc... Given his included both the cast, call, and member access, I'm going to toss them his way. Cheers.
    Barry Kelly : You're welcome :)
    Marc Gravell : +1 from me too - good to see somebody else who can talk "Expression" ;-p
  • Can you illustrate what (if it was regular C#) you want the expression to evaluate? I can write the expression easily enough - I just don't fully understand the question...

    (edit re comment) - in that case, it'll be something like:

        ConverterBase typeConverter = new ConverterBase();
        var target = Expression.Parameter(typeof(U), "target");
        var getter = Expression.MakeMemberAccess(target, info);
        var converter = Expression.Constant(typeConverter, typeof(ConverterBase));
    
        return Expression.Lambda<Func<U, string>>(
        Expression.Call(converter, typeof(ConverterBase).GetMethod("FieldToString"),
            getter), target).Compile();
    

    Or if the type refuses to bind, you'll need to inject a cast/convert:

        MethodInfo method = typeof(ConverterBase).GetMethod("FieldToString");
        return Expression.Lambda<Func<U, string>>(
            Expression.Call(converter, method,
                Expression.Convert(getter, method.GetParameters().Single().ParameterType)),
                target).Compile();
    
    TheSoftwareJedi : Perfect. Works like a charm. I ran up against the cast problem, but came back here and saw this answer... Thanks!

ASP.NET - controls generated by xslt transformation

Hi! I'm generating controls dynamically on my ASP.NET page by XSLT transformation from an XML file. I will need to reference these controls from code-behind later. I would like to add these references to a list/hashtable/whatever during creation (in the XSLT file, I suppose) so that I can reach them later, and I have no idea how to do this. I would be absolutely grateful for any suggestions, agnieszka

From stackoverflow
  • Can you give a better idea of what you are trying to do?

    XML > XSLT > produces aspx page

    Sounds close to reinventing the windows presentation framework or XUL

    Or is it ASPX reads xml > uses XSLT to add DOM elements to page... Sounds like AJAX

    You want to write out a unique ID using the attribute transform http://www.w3schools.com/XSL/el_attribute.asp

  • Could be tricky with a pure XSL solution.

    You might be able to call a template which iterates the XML nodes you are using to generate the controls, and writes out a C#/VB script block which adds them to a container of your choice.

    Another option could be to add msxsl:script to your template, and use C# or another language to generate the output you want. This can sometimes be easier than a pure XSL solution, but does come with a performance cost.

    It might be worth having a look at the source code of Umbraco, which utilises XSL pretty heavily and possibly already does what you are looking for.

  • Once you have transformed your XML using XSLT, you could pass the output to the ASP.Net ParseControl method and it will return your controls ready to use. For example this code will place two buttons on the page:

    protected void Page_Load(object sender, EventArgs e)
    {
        // Fetch your XML here and transform it.  This string represents
        // the transformed output
        string content = @"
            <asp:Button runat=""server"" Text=""Hello"" />
            <asp:Button runat=""server"" Text=""World"" />";
    
        var controls = ParseControl(content);
    
        foreach (var control in controls)
        {
            // Wire up events, change settings etc here
        }
    
        // placeHolder is simply an ASP.Net PlaceHolder control on the page
        // where I would like the controls to end up
        placeHolder.Controls.Add(controls);
    }
    
    Rune Grimstad : Ha! Then I learned something new and useful from SO today as well! :-)
    James Campbell : I tried this and get: foreach statement cannot operate variable of type System.Web.UI.Control because System.Web.UI.Control does not contain a public definition for 'GetEnumerator'
  • Thanks for all the answers.

    This is what I do (it's not my code, but I'm doing it the same way):

    private void CreateControls()
    {
        XPathDocument surveyDoc = new XPathDocument(Server.MapPath("ExSurvey.xml"));

        // Load the xslt to do the transformations
        XslTransform transform = new XslTransform();
        transform.Load(Server.MapPath("MakeControls.xslt"));

        // Get the transformed result
        StringWriter sw = new StringWriter();
        transform.Transform(surveyDoc, null, sw);
        string result = sw.ToString();

        // parse the control(s) and add it to the page
        Control ctrl = Page.ParseControl(result);
        form1.Controls.Add(ctrl);
    }

    The first solution (from Generic Error) is not good enough, because I need to identify the controls; for example, during the XSLT transformation I will create 3 groups of controls, all having different ids. I would like to put references to each control from a group in a different hashtable so that I would know later which controls are in each group.

    The best solution would be to do it somehow when creating a control (so in the XSLT code), but I don't know if it's possible.
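
    One way that might work (a sketch only; the ID prefixes and helper names below are assumptions, not something from the thread, and it assumes the usual System.Web.UI and System.Collections.Generic namespaces) is to give each group a distinct ID prefix in the XSLT, then walk the parsed control tree in code-behind and bucket the controls by prefix:

    // Hypothetical helper: collect parsed controls into one list per group,
    // keyed by an ID prefix such as "group1_" that the XSLT assigns.
    private Dictionary<string, List<Control>> GroupControlsById(Control root, string[] prefixes)
    {
        Dictionary<string, List<Control>> groups = new Dictionary<string, List<Control>>();
        foreach (string prefix in prefixes)
        {
            groups[prefix] = new List<Control>();
        }
        Collect(root, prefixes, groups);
        return groups;
    }

    private void Collect(Control parent, string[] prefixes, Dictionary<string, List<Control>> groups)
    {
        foreach (Control child in parent.Controls)
        {
            foreach (string prefix in prefixes)
            {
                if (child.ID != null && child.ID.StartsWith(prefix))
                {
                    groups[prefix].Add(child);
                }
            }
            Collect(child, prefixes, groups); // recurse into container controls
        }
    }
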

What does PermGen actually stand for?

I know what PermGen is, what it's used for, why it fails, how to increase it etc.

What I don't know is what PermGen actually stands for. Permanent... Gen... something?

Does anyone know what PermGen actually stands for?

From stackoverflow
  • Permanent Generation. See the java GC tuning guide for more details on the garbage collector.

  • Permanent Generation.

    The garbage collector is known as a Generational garbage collector. Long lived objects wind up in the Permanent Generation.

    Ivan Dubrov : This is not very true, I think. Permanent generation is for special kind of objects used by the JVM (class objects, method objects). Regular objects are never promoted to this generation, AFAIK.
    Calum : The above comment is correct; the permanent generation is for types that are considered to be "permanent", rather than just very long-lived. Of course this doesn't stop it being populated by classes, which can get collected if a classloader becomes unreachable.
    Tom Hawtin - tackline : Yup, this answer is talking about the tenured generation.
  • Permanent generation

  • If I remember correctly, the gen stands for generation, as in a generational garbage collector (that treats younger objects differently than mid-life and "permanent" objects). Principle of locality suggests that recently created objects will be wiped out first.

  • PermGen stands for Permanent Generation.

    Here is a brief blurb on DDJ

  • A good description from a guy who knows a lot about GC internals. There is plenty of useful GC-related info in his blog, by the way.

  • Permanent Generation. Details are of course implementation specific.

    Briefly, it contains the Java objects associated with classes and interned strings. In Sun's client implementation with sharing on, classes.jsa is memory mapped to form the initial data, with about half read-only and half copy-on-write.

    Java objects that are merely old are kept in the Tenured Generation.

  • Unfortunately, the answer chosen is wrong. PermGen is used by the JVM to hold loaded classes. You can increase it using:

    -XX:MaxPermSize=384m

    if you're using the Sun JVM

    So if you get an OutOfMemoryException: PermGen you need to either make PermGen bigger or you might be having class loader problems.

Why can't main() be declared as static in C?

Why must main be declared as if it has external linkage?

Why should it not be static?

What is meant by external linkage?

From stackoverflow
  • Because you link the startup files with your program, and they contain (usually) assembler code that calls your main. If main were static, that code wouldn't be able to call main.

    External linkage means that other so-called translation units can see your symbol, which is declared extern in its own translation unit. So, your main is extern, and it will have an entry in its translation unit's symbol table that states its address. Other translation units will then be able to jump to that address when they want to call main.

    Static linkage means your symbol is strictly local to its translation unit. This means other translation units will not be able to see that symbol. Thus, symbols with static linkage can occur in different translation units multiple times, and they won't clash with each other because they are local.

    Edit: Generally, files generated by the compiler from translation units are specific to that particular compiler. For gcc on linux, often the ELF object format is used. You can view its symbol table using readelf -sW <file>.o (simple test-file below):

    test.c

    void bar(void);
    
    static int foo(void) {
        return 1;
    }
    
    int main(void) {
        bar();
        return foo();
    }
    

    Here is the output of readelf:

    Symbol table '.symtab' contains 10 entries:
       Num:    Value  Size Type    Bind   Vis      Ndx Name
         0: 00000000     0 NOTYPE  LOCAL  DEFAULT  UND
         1: 00000000     0 FILE    LOCAL  DEFAULT  ABS test.c
         2: 00000000     0 SECTION LOCAL  DEFAULT    1
         3: 00000000     0 SECTION LOCAL  DEFAULT    3
         4: 00000000     0 SECTION LOCAL  DEFAULT    4
         5: 00000000    10 FUNC    LOCAL  DEFAULT    1 foo
         6: 00000000     0 SECTION LOCAL  DEFAULT    6
         7: 00000000     0 SECTION LOCAL  DEFAULT    5
         8: 0000000a    36 FUNC    GLOBAL DEFAULT    1 main
         9: 00000000     0 NOTYPE  GLOBAL DEFAULT  UND bar
    

    You see the main function, and a static foo function, called by main. There is also a function bar which is called but not defined in this file; it is defined in another object file. As the object file hasn't been finally linked yet, the functions don't have final addresses assigned. After the final link, they will be arranged into the executable and will have addresses assigned. The object file has relocation entries for calls to not-yet-defined functions, so that when the file is linked, those call instructions can have the final addresses stored (readelf -r <file>.o):

    Relocation section '.rel.text' at offset 0x308 contains 1 entries:
     Offset     Info    Type            Sym.Value  Sym. Name
    0000001c  00000902 R_386_PC32        00000000   bar
    
    Manoj Doubts : How can we analyse those translation units? Can we see them in the form of tables or graphical forms for every file? If that's not possible, then from which compiler-generated file can we know the attributes of a function?
    Barry Kelly : Your tools vendor will have a utility that does this. E.g. objdump (Unixy), dumpbin (MSFT), tdump (Borland/CodeGear/Embarcadero)
    Manoj Doubts : what is it for gcc?
    Manoj Doubts : ok ok thank you guys for your explanations.It helped me a lot to know
    mahesh : Awesome Answer. Liked it much
    quark : There's also 'nm' for Unixes. 'objdump' is more powerful, but you need to understand the options.
  • The real starting point of the code is buried in the C runtime library. This runtime library calls your main() routine. In order for the linker to connect the C RTL call with your main() function, it needs to be visible outside the file.

    External linkage is just this: it means that the name in question is visible as part of the exports of the object file. The job of the linker is to join up all of the imports and exports so that there are no outstanding imports.

Support for foreign key constraint in Rails

In Ruby on Rails, how do I add a foreign key constraint in a migration?

From stackoverflow
  • AFAIK, there isn't any built-in support for that, but there are several plugins that help you with it. You can also add constraints manually in your migration files; just use the execute method, e.g. (sample from the Rails API):

      class MakeJoinUnique < ActiveRecord::Migration
        def self.up
          execute "ALTER TABLE `pages_linked_pages` ADD UNIQUE `page_id_linked_page_id` (`page_id`,`linked_page_id`)"
        end
    
        def self.down
          execute "ALTER TABLE `pages_linked_pages` DROP INDEX `page_id_linked_page_id`"
        end
      end
    
  • http://blog.hasmanythrough.com/2007/1/15/basic-rails-association-cardinality

  • Here's a gem-based solution that includes support for adding and removing foreign key constraints, doesn't fail with sqlite, and works correctly with schema.rb files:

    http://github.com/matthuhiggins/foreigner

  • This is an update to the matthuhiggins-foreigner gem: http://github.com/sparkfly/foreigner

    Features:

    • rspec coverage, tested against PostgreSQL 8.3.9 and MySQL 5.0.90
    • Migration support
    • schema.rb support

    Future versions will include CHECK constraints for PostgreSQL, which is needed to implement multi-table inheritance.
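
    With one of these installed (the sketch below assumes the foreigner gem's add_foreign_key/remove_foreign_key helpers and an existing comments.post_id column), a migration can declare the constraint without raw SQL:

    class AddPostForeignKeyToComments < ActiveRecord::Migration
      def self.up
        add_foreign_key :comments, :posts   # assumes comments.post_id references posts.id
      end

      def self.down
        remove_foreign_key :comments, :posts
      end
    end
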

Error adding policy file to GAC

Hi all, I'm trying to add a publisher policy file to the GAC as per this thread, but I'm having problems when I try to add the file on my test server.

I get "A module specified in the manifest of assembly 'policy.3.0.assemblyname.dll' could not be found"

My policy file looks like this:

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="*assemblyname*"
                          publicKeyToken="7a19eec6f55e2f84"
                          culture="neutral" />
        <bindingRedirect oldVersion="3.0.0.0"
                         newVersion="3.0.0.1"/>
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>

Please help!

Thanks

Ben

From stackoverflow
  • Hi Ben,

    Ok...just want to check some basics....

    You definitely have got both versions of the dependent assembly installed in the GAC?

    And have you verified that the version numbers in the [assembly: AssemblyVersion()] attribute are correct?

    And you did use [assembly: AssemblyVersion()] and NOT [assembly: AssemblyFileVersion("1.0.0.1")].

    Update: My mistake, you only need the latest version of the assembly in the GAC. I just tried that here and it works. My only other thoughts are to check that the public key tokens are the same and that you've not mispelled the assembly name.

    Also when you generate the policy file make sure you use the /version switch in the assembly linker to explicitly set the version number to 3.0.0.0 AND don't specify the /platform switch. e.g.

    al.exe /link:assembly.config /out:policy.3.0.assembly.dll 
             /keyfile:mykey.snk /version:3.0.0.0
    

    Cheers
    Kev

  • Hi Kev - thanks for replying. No, I've only got the target version of the assembly in the GAC (3.0.0.1) - but this works on my dev machine (well, it installs to the GAC; not sure if it redirects OK yet). Do I have to have the unused version in the GAC if I want to do this redirect?

    Assembly version and Assembly File version are both set to 3.0.0.1

    Kev : Actually it seems you only need the latest version of the assembly in the GAC, so apologies for that red herring.
    Kev : I added some more comments to my answer.
    Ben : no worries, cool thanks
  • I wasn't supplying the version before and was specifying the /platform switch - however, I made the change to the linker command and it's still the same problem :(

    I'm now calling it like this

    copy PublisherPolicy.xml ..\bin\Release
    C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\al.exe /link:..\bin\Release\PublisherPolicy.xml /out:..\bin\Release\policy.3.0.*assemblyname*.dll /keyfile:..\..\key.snk /version:3.0.0.0
    pause
    

    It still installs on my dev box - it must be something to do with the environment on the test server but I've no idea what it is. What's it looking for - what does it want to resolve?

  • I've recreated the problem from scratch with a new assembly that has no dependencies (apart from the defaults) itself - all works fine on my local development machine (and it redirects fine too), but it gives the same error when adding the policy file to the GAC on the server!

    <configuration>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <dependentAssembly>
            <assemblyIdentity name="TestAsm"
                              publicKeyToken="5f55456fdcc9b528"
                              culture="neutral" />
            <bindingRedirect oldVersion="3.0.0.0"
                             newVersion="3.0.0.1"/>
          </dependentAssembly>
        </assemblyBinding>
      </runtime>
    </configuration>
    

    linked in the following way

    C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\al.exe /link:PublisherPolicy.xml /out:policy.3.0.TestAsm.dll /keyfile:..\..\key.snk /version:3.0.0.0
    pause
    

    Please help!

  • Wow - ok got it.

    I should have paid more attention to exactly what this meant

    (MSDN) How to: Create a Publisher Policy

    Important Note: The publisher policy assembly cannot be added to the global assembly cache unless the original publisher policy file is located in the same directory as the assembly.

    That requirement is, frankly, so bizarre that it didn't register. The original policy file that was compiled into the policy assembly I'm trying to add to the GAC has to be in the same folder as the policy assembly when you add the policy assembly.

  • How do you add it to the GAC if you were installing the app using Wise?

  • To add policy assemblies to the GAC using Wise, you do the same thing as you do to add the assembly the policy is for. So you add the policy assembly to the "Global Assembly Cache" in Wise, and as long as you have the policy file (.config) in the same location on the machine, Wise will automatically add it to GAC as well.