Tuesday, March 1, 2011

Change an element's onfocus handler with JavaScript?

Hello,

I have a form that has default values describing what should go into the field (replacing a label). When the user focuses a field this function is called:

function clear_input(element)
{
    element.value = "";
    element.onfocus = null;
}

The onfocus is set to null so that if the user puts something in the field and decides to change it, their input is not erased (so it is only erased once). Now, if the user moves on to the next field without entering any data, then the default value is restored with this function (called onblur):

function restore_default(element)
{
    if(element.value == '')
    {
        element.value = element.name.substring(0, 1).toUpperCase()
                          + element.name.substring(1, element.name.length);
    }
}

It just so happened that the default values were the names of the elements, so instead of adding an ID, I just manipulated the name property. The problem is that if they do skip over the element, the onfocus handler is nullified by clear_input but never restored.

I added

element.onfocus = "javascript:clear_input(this);";

in the restore_default function, but that doesn't work. How do I do this?

From stackoverflow
  • Use

    element.onfocus = clear_input;
    

    or (with parameters)

    element.onfocus = function () { 
        clear_input( param, param2 ); 
    };
    

    with

    function clear_input () {
        this.value = "";
        this.onfocus = null;
    }
    

    The "javascript:" bit is unnecessary.

  • I would suggest that you handle it a little differently. Instead of clearing the value, why not just highlight it all so that the user can just start typing to overwrite it. Then you don't need to restore the default value (although you could still do so and in the same way if the value is empty). You also can leave the handler in place since the text is not cleared, just highlighted. Use validation to make sure the value is not the original value of the input.

    function highlight_input(element) {
        element.select();
    }
    
    function restore_default(element) // optional, do we restore if the user deletes?
    {
        if(element.value == '')
        {
            element.value = element.name.substring(0, 1).toUpperCase()
                              + element.name.substring(1, element.name.length);
        }
    }
    
  • It looks like you don't allow the fields to be empty, but what if the user puts one or more spaces in the field? If you want to prevent this, you need to trim the value. (See Steven Levithan's blog for different ways to trim.)

    function trim(str) {
        return str.replace(/^\s\s*/, '').replace(/\s\s*$/, '');
    }
    

    If you really want to capitalize the strings you could use:

    function capitalize(str) {
        return str.substr(0, 1).toUpperCase() + str.substr(1).toLowerCase();
    }
    

    By clearing the onfocus event you have created a problem that should not have been there. An easier solution is to just add an if-statement to the onfocus event, so it only clears if it is your default value (but I prefer to select it like tvanfosson suggested).

    I assume that you have set the value property on your input elements, so that a value is shown in them when the page is displayed even if JavaScript is disabled. That value is available as element.defaultValue. Bonuses of using this approach:

    • You only define the default value in one place.
    • You no longer need to capitalize any value in your handlers.
    • The default value can have any case (like "John Y McMain")
    • The default value no longer needs to be the same as the name of the element.


    function clear_default(element) {
        if (trim(element.value) == element.defaultValue ) { element.value = ""; }
    }
    
    function restore_default(element) {
        if (!trim(element.value).length) { element.value = element.defaultValue;}
    }
    
    José Leal : Very good! Keep it up buddy =D
  • <!-- JavaScript
    function checkClear(A, B) {
        if (arguments[2]) { A = arguments[1]; B = arguments[2]; }
        if (A.value == B) { A.value = ""; }
        else if (A.value == "") { A.value = "Search"; }
    }
    //-->
    
    <form method="post" action="search.php">
    <input type="submit" name="1">
    <input type="text" name="srh" Value="Search" onfocus="checkClear(this,'Search')" onblur="checkClear(this,' ')">
    </form>
    

How does VC++ mangle local static variable names?

Here's some code I have:

MyClass* MyClass::getInstance()
{
   static MyClass instance;
   return &instance;
}

I want to look into this singleton's current values. But I'm currently paused three hours into execution, and the reason I'm paused is that I'm out of memory. So I can't put a breakpoint in this method to see what the value is.

My question then is how to refer to this instance variable from a global scope. I've tried referring to it as MyClass::getInstance::instance but that doesn't work. I'm guessing getInstance has to be decorated somehow. Anyone know how?

This is in Visual Studio 2008.

From stackoverflow
  • In gdb, you can put a watchpoint on the mangled name of the variable.

    For example, with this function:

    int f() {
        static int xyz = 0;
        ++xyz;
    
        return xyz;
    }
    

    I can watch _ZZ1fvE3xyz (as mangled by gcc 3.2.3 or gcc 4.0.1).

    Owen : Awesome... That's the sort of thing I was looking for, only for the MS compiler. Good to know that it can be done.
  • That code just looks dangerous... :-)

    But anyway, the mangled name is going to depend on the calling convention. So before you look for the mangled name, you need to know which calling convention your build environment is using; MSDN has a lot more information on calling conventions.

    Besides this, one way to find out all this information about your class is to inspect its vtable, which is found in the first 4 bytes of the object (on 32-bit builds). A nifty trick that reversers use is the hidden VC++ flag /d1reportSingleClassLayout, which prints the class structure in an ASCII-art manner.
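
    For example, to dump MyClass's layout at compile time (the flag is undocumented, so treat its exact spelling and output as version-dependent):

    cl /c /d1reportSingleClassLayoutMyClass MyClass.cpp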

  • Well, the function-scoped static instance variable doesn't show up in a .map file generated by cl.exe /Fm, and it doesn't show up when I use x programname!*MyClass* in WinDbg, so the mangled name doesn't seem to contain MyClass at all.

    Option 1: Disassemble MyClass::getInstance

    This approach seems easier:

    0:000> uf programname!MyClass::getInstance
    programname!MyClass::getInstance [programname.cpp @ 14]:
       14 00401050 55              push    ebp
       14 00401051 8bec            mov     ebp,esp
       15 00401053 a160b34200      mov     eax,dword ptr [programname!$S1 (0042b360)]
       15 00401058 83e001          and     eax,1
       15 0040105b 7526            jne     programname!MyClass::getInstance+0x33 (00401083)
    
    programname!MyClass::getInstance+0xd [programname.cpp @ 15]:
       15 0040105d 8b0d60b34200    mov     ecx,dword ptr [programname!$S1 (0042b360)]
       15 00401063 83c901          or      ecx,1
       15 00401066 890d60b34200    mov     dword ptr [programname!$S1 (0042b360)],ecx
       15 0040106c b9b0be4200      mov     ecx,offset programname!instance (0042beb0)
       15 00401071 e88fffffff      call    programname!ILT+0(??0MyClassQAEXZ) (00401005)
       15 00401076 68e03e4200      push    offset programname!`MyClass::getInstance'::`2'::`dynamic atexit destructor for 'instance'' (00423ee0)
       15 0040107b e8f3010000      call    programname!atexit (00401273)
       15 00401080 83c404          add     esp,4
    
    programname!MyClass::getInstance+0x33 [programname.cpp @ 16]:
       16 00401083 b8b0be4200      mov     eax,offset programname!instance (0042beb0)
       17 00401088 5d              pop     ebp
       17 00401089 c3              ret
    

    From this we can tell that the compiler called the object $S1. Of course, this name will depend on how many function-scoped static variables your program has.

    Option 2: Search memory for the object

    To expand on @gbjbaanb's suggestion, if MyClass has virtual functions, you might be able to find its location the hard way:

    • Make a full memory dump of the process.
    • Load the full memory dump into WinDbg.
    • Use the x command to find the address of MyClass's vtable:
        0:000> x programname!MyClass::`vftable'
        00425c64 programname!MyClass::`vftable' = 
    
    • Use the s command to search the process's virtual address space (in this example, 0-2GB) for pointers to MyClass's vtable:
        0:000> s -d 0 L?7fffffff 00425c64
        004010dc  00425c64 c35de58b cccccccc cccccccc  d\B...].........
        0040113c  00425c64 8bfc458b ccc35de5 cccccccc  d\B..E...]......
        0042b360  00425c64 00000000 00000000 00000000  d\B.............
    
    • Use the dt command to find the class's vtable offset, and subtract that from the addresses returned from the search. These are possible addresses for the object.
        0:000> dt programname!MyClass
           +0x000 __VFN_table      : Ptr32 
           +0x008 x                : Int4B
           +0x010 y                : Float
    
    • Use dt programname!MyClass 0042b360 to examine the object's member variables, testing the hypothesis that the object is located at 0042b360 (or some other address). You will probably get some false positives, as I did above, but by inspecting the member variables you may be able to figure out which one is your singleton.

    This is a general technique for finding C++ objects, and is kind of overkill when you could just disassemble MyClass::getInstance.

Understanding Dijkstra's Mozart programming style

I came across this article about programming styles as seen by Edsger Dijkstra. To quickly paraphrase, the main difference is that Mozart attempted to figure everything out in his head before writing anything, while Beethoven made his decisions as he wrote the notes out on paper, creating many revisions along the way. With Mozart programming, version 1.0 would be the only version for software that should aim to work with no errors and maximum efficiency. Also, Dijkstra says software not at that level of refinement and stability should not be released to the public.

Based on his views, two questions. Is Mozart programming even possible? Would the software we write today really benefit if we adopted the Mozart style instead?

My thoughts. It seems, to address the increasing complexity of software, we've moved on from this method to things like agile development, public beta testing, and constant revisions, methods that define web development, where speed matters most. But when I think of all the revisions web software can go through, especially during maintenance, when often patches are applied over patches, to then be refined through a tedious refactoring process—the Mozart way seems very attractive. It would at least lessen those annoying software updates, e.g. Digsby, Windows, iTunes, etc., many the result of unforeseen vulnerabilities that require a new and immediate release.

Edit: Refer to the response below for a more accurate explanation of Dijkstra's views.

From stackoverflow
  • Well, we can't all be as good as Mozart, can we? Perhaps Beethoven programming is easier.

  • A classic story from Usenet, about a true programming Mozart.

    Real Programmers write in Fortran.

    Maybe they do now, in this decadent era of Lite beer, hand calculators and "user-friendly" software but back in the Good Old Days, when the term "software" sounded funny and Real Computers were made out of drums and vacuum tubes, Real Programmers wrote in machine code. Not Fortran. Not RATFOR. Not, even, assembly language. Machine Code. Raw, unadorned, inscrutable hexadecimal numbers. Directly.

    Lest a whole new generation of programmers grow up in ignorance of this glorious past, I feel duty-bound to describe, as best I can through the generation gap, how a Real Programmer wrote code. I'll call him Mel, because that was his name.

    I first met Mel when I went to work for Royal McBee Computer Corp., a now-defunct subsidiary of the typewriter company. The firm manufactured the LGP-30, a small, cheap (by the standards of the day) drum-memory computer, and had just started to manufacture the RPC-4000, a much-improved, bigger, better, faster -- drum-memory computer. Cores cost too much, and weren't here to stay, anyway. (That's why you haven't heard of the company, or the computer.)

    I had been hired to write a Fortran compiler for this new marvel and Mel was my guide to its wonders. Mel didn't approve of compilers.

    "If a program can't rewrite its own code," he asked, "what good is it?"

    Mel had written, in hexadecimal, the most popular computer program the company owned. It ran on the LGP-30 and played blackjack with potential customers at computer shows. Its effect was always dramatic. The LGP-30 booth was packed at every show, and the IBM salesmen stood around talking to each other. Whether or not this actually sold computers was a question we never discussed.

    Mel's job was to re-write the blackjack program for the RPC-4000. (Port? What does that mean?) The new computer had a one-plus-one addressing scheme, in which each machine instruction, in addition to the operation code and the address of the needed operand, had a second address that indicated where, on the revolving drum, the next instruction was located. In modern parlance, every single instruction was followed by a GO TO! Put that in Pascal's pipe and smoke it.

    Mel loved the RPC-4000 because he could optimize his code: that is, locate instructions on the drum so that just as one finished its job, the next would be just arriving at the "read head" and available for immediate execution. There was a program to do that job, an "optimizing assembler", but Mel refused to use it.

    "You never know where it's going to put things", he explained, "so you'd have to use separate constants".

    It was a long time before I understood that remark. Since Mel knew the numerical value of every operation code, and assigned his own drum addresses, every instruction he wrote could also be considered a numerical constant. He could pick up an earlier "add" instruction, say, and multiply by it, if it had the right numeric value. His code was not easy for someone else to modify.

    I compared Mel's hand-optimized programs with the same code massaged by the optimizing assembler program, and Mel's always ran faster. That was because the "top-down" method of program design hadn't been invented yet, and Mel wouldn't have used it anyway. He wrote the innermost parts of his program loops first, so they would get first choice of the optimum address locations on the drum. The optimizing assembler wasn't smart enough to do it that way.

    Mel never wrote time-delay loops, either, even when the balky Flexowriter required a delay between output characters to work right. He just located instructions on the drum so each successive one was just past the read head when it was needed; the drum had to execute another complete revolution to find the next instruction. He coined an unforgettable term for this procedure. Although "optimum" is an absolute term, like "unique", it became common verbal practice to make it relative: "not quite optimum" or "less optimum" or "not very optimum". Mel called the maximum time-delay locations the "most pessimum".

    After he finished the blackjack program and got it to run, ("Even the initializer is optimized", he said proudly) he got a Change Request from the sales department. The program used an elegant (optimized) random number generator to shuffle the "cards" and deal from the "deck", and some of the salesmen felt it was too fair, since sometimes the customers lost. They wanted Mel to modify the program so, at the setting of a sense switch on the console, they could change the odds and let the customer win.

    Mel balked. He felt this was patently dishonest, which it was, and that it impinged on his personal integrity as a programmer, which it did, so he refused to do it. The Head Salesman talked to Mel, as did the Big Boss and, at the boss's urging, a few Fellow Programmers. Mel finally gave in and wrote the code, but he got the test backwards, and, when the sense switch was turned on, the program would cheat, winning every time. Mel was delighted with this, claiming his subconscious was uncontrollably ethical, and adamantly refused to fix it.

    After Mel had left the company for greener pa$ture$, the Big Boss asked me to look at the code and see if I could find the test and reverse it. Somewhat reluctantly, I agreed to look. Tracking Mel's code was a real adventure.

    I have often felt that programming is an art form, whose real value can only be appreciated by another versed in the same arcane art; there are lovely gems and brilliant coups hidden from human view and admiration, sometimes forever, by the very nature of the process. You can learn a lot about an individual just by reading through his code, even in hexadecimal. Mel was, I think, an unsung genius.

    Perhaps my greatest shock came when I found an innocent loop that had no test in it. No test. None. Common sense said it had to be a closed loop, where the program would circle, forever, endlessly. Program control passed right through it, however, and safely out the other side. It took me two weeks to figure it out.

    The RPC-4000 computer had a really modern facility called an index register. It allowed the programmer to write a program loop that used an indexed instruction inside; each time through, the number in the index register was added to the address of that instruction, so it would refer to the next datum in a series. He had only to increment the index register each time through. Mel never used it.

    Instead, he would pull the instruction into a machine register, add one to its address, and store it back. He would then execute the modified instruction right from the register. The loop was written so this additional execution time was taken into account -- just as this instruction finished, the next one was right under the drum's read head, ready to go. But the loop had no test in it.

    The vital clue came when I noticed the index register bit, the bit that lay between the address and the operation code in the instruction word, was turned on-- yet Mel never used the index register, leaving it zero all the time. When the light went on it nearly blinded me.

    He had located the data he was working on near the top of memory -- the largest locations the instructions could address -- so, after the last datum was handled, incrementing the instruction address would make it overflow. The carry would add one to the operation code, changing it to the next one in the instruction set: a jump instruction. Sure enough, the next program instruction was in address location zero, and the program went happily on its way.

    I haven't kept in touch with Mel, so I don't know if he ever gave in to the flood of change that has washed over programming techniques since those long-gone days. I like to think he didn't. In any event, I was impressed enough that I quit looking for the offending test, telling the Big Boss I couldn't find it. He didn't seem surprised.

    When I left the company, the blackjack program would still cheat if you turned on the right sense switch, and I think that's how it should be. I didn't feel comfortable hacking up the code of a Real Programmer.

    andy.gurin : Nice story! I have really enjoyed it, thanks!
    CesarB : You forgot the title of the story: "The Story of Mel". See http://en.wikipedia.org/wiki/Mel_Kaye for some more details.
  • If Apple adopted "Mozart" programming, there would be no Mac OS X or iTunes today.

    If Google adopted "Mozart" programming, there would be no Gmail or Google Reader.

    If SO developers adopted "Mozart" programming, there would be no SO today.

    If Microsoft adopted "Mozart" programming, there would be no Windows today (well, I think that would be good).

    So the answer is simply NO. Nothing is perfect, and nothing is ever meant to be perfect, and that includes software.

    hlfcoding : Agreed on the Microsoft.
  • The Mozart programming style is a complete myth (everybody has to edit and modify their initial efforts), and although "Mozart" is essentially a metaphor in this example, it's worth noting that Mozart was substantially a myth himself.

    Mozart was a supposed magical child prodigy who composed his first sonata at 4 (he was actually 6, and it sucked - you won't ever hear it performed anywhere). It's rarely mentioned, of course, that his father was considered Europe's greatest music teacher, and that he forced all of his children to practice playing and composition for hours each day as soon as they could pick up an instrument or a pen.

    Mozart himself was careful to perpetuate the illusion that his music emerged whole from his mind by destroying most of his drafts, although enough survive to show that he was an editor like everyone else. Beethoven was just more honest about the process (maybe because he was deaf and couldn't tell if anyone was sneaking up on him anyway).

    I won't even mention the theory that Mozart got his melodies from listening to songbirds. Or the fact that he created a system that used dice to randomly generate music (which is actually pretty cool, but might also explain how much of Mozart's music appeared to come from nowhere).

    The moral of the story is: don't believe the hype. Programming is work, followed by more work to fix the mistakes you made the first time around, followed by more work to fix the mistakes you made the second time around, and so on and so forth until you die.

    John Nolan : Wow Mozart used dice to compose! Trivia for the day.
    MusiGenesis : The music it created was about as bad as his first sonata. My utterly-unverifiable theory was that he kept improving it but kept it secret (for obvious reasons). I've always found Mozart's music to be pretty formulaic - this would certainly explain that.
    alexmeia : Great answer, but the last sentence is so sad.
    MusiGenesis : I thought it was kind of optimistic, actually. I hope I can program for the rest of my life, and if there were actual Mozarts running around the profession, that might not happen. :)
    Cheeso : Wery Wery interesting about Mozart-the-Myth. Everything I know about Mozart, I learned watching the movie, "Amadeus".
  • Progress in computing is worth a sacrifice in glory or genius here and there.

  • I think it's possible to appear to employ Mozart programming. I know of one company, Blizzard, that doesn't release a software product until they're good and ready. This doesn't mean that Diablo 3 will spring whole and complete from someone's mind in one session of dazzlingly brilliant coding. It does mean that that's how it will appear to the rest of us. Blizzard will test the heck out of their game internally, not showing it to the rest of the world until they've got all the kinks worked out. Most companies don't take this approach, preferring instead to release software when it's good enough to solve a problem, then fix bugs and add features as they come up. This approach works (to varying degrees) for most companies.

    mlvljr : +1; reminds of "A rational design process: How and why to fake it" (http://web.cs.wpi.edu/~gpollice/cs3733-b05/Readings/FAKE-IT.pdf)
  • It doesn't scale.

    I can figure out a line of code in my head, a routine, and even a small program. But a medium program? There are probably some guys that can do it, but how many, and how much do they cost? And should they really write the next payroll program? That's like wasting Mozart on muzak.

    Now, try to imagine a team of Mozarts. Just for a few seconds.


    Still it is a powerful instrument. If you can figure out a whole line in your head, do it. If you can figure out a small routine with all its funny cases, do it.

    On the surface, it avoids going back to the drawing board because you didn't think of one edge case that requires a completely different interface altogether.

    The deeper meaning (head fake?) can be explained by learning another human language. For a long time you keep thinking about which words represent your thoughts, and how to order them into a valid sentence - that transcription costs a lot of foreground cycles.
    One day you will notice the liberating feeling that you just talk. It may feel like "thinking in a foreign language", or as if "the words come naturally". You will sometimes stumble, looking for a particular word or idiom, but most of the time translation runs in the vast resources of the "subconscious CPU".


    The "high goal" is developing a mental model of the solution that is (mostly) independent of the implementation language, to separate solution of a problem from transcribing the problem. Transcription is easy, repetetive and easily trained, and abstract solutions can be reused.

    I have no idea how this could be taught, but "figuring out as much as possible before you start to write it" sounds like good programming practice towards that goal.

    omouse : Dijkstra said something about that too if you bothered to read anything he wrote. He said we divide things up into smaller pieces and understand each piece and then the whole. Most programmers and computing scientists can barely handle the small pieces!
  • Edsger Dijkstra discusses his views on Mozart vs Beethoven programming in this YouTube video entitled "Discipline in Thought".

    People in this thread have pretty much discussed how Dijkstra's views are impractical. I'm going to try and defend him some.

    • Dijkstra is against companies essentially "testing" their software on their customers - releasing version 1.0 and then immediately a 1.1 patch. He felt that a program should be polished to such a degree that "hotfix" patches are borderline unethical.
    • He did not think that software should be written in one fell swoop or that changes would never need to be made. He often discusses his design ideals, one of them being modularity and ease of change. He did, however, think that individual algorithms should be written in this way, after you have completely understood the problem. That was part of his discipline.
    • He found, after all his extensive experience with programmers, that programmers aren't happy unless they are pushing the limits of their knowledge. He said that programmers didn't want to program something they completely and 100% understood because there was no challenge in it. Programmers always want to be on the brink of their knowledge. While he understood why programmers are like that, he stated that it wasn't representative of low-error-tolerance programming.

    There are some industries or applications of programming where I believe Dijkstra's "discipline" is warranted as well: NASA rovers, health-industry embedded devices (e.g. those that dispense medication), certain financial software that transfers our money. These areas don't have the luxury of incremental change after release, and a more "Mozart approach" is necessary.

    hlfcoding : agreed. case in point: Digsby
    Adam Bernier : Great answer. Your third point is a quite valuable thing to remain aware of and guard against.
    Curt Sampson : Could this be paraphrased as, "experimentation and 'playing around' is fine so long as you use it to figure out what you are later going to deliver, rather than deliver the experiment"? Not that any kind of experimentation is on the level of "deliver something that will fool the customer for a while...."
    Cheeso : There is computer science and there is software delivery, and those two are not the same. Testing software on customers seems to have evolved as a necessary (and evil) part of the current market. Beautiful, efficient, optimized algorithms may be necessary, but are not sufficient, for successful software products.
    RCIX : Well, you have very good points. Mozart programming has its use, but that's not to say that it's 100% bad to release something that may have bugs in it (in non-critical applications), for two reasons: 1, you have to get it out the door sometime, and 2, users are perhaps the most effective bug-finding tools in existence :)
  • I think the Mozart story confuses what gets shipped versus how it is developed. Beethoven did not beta-test his symphonies on the public. (It would be interesting to see how much he changed any of the scores after the first public performance.)

    I also don't think that Dijkstra was insisting that it all be done in your head. After all, he wrote books on disciplined programming that involved working it out on paper, and to the same extent that he wanted to see mathematical-quality discipline, have you noticed how much paper and chalk board mathematicians may consume while working on a problem?

    I favor Simucal's response, but I think the Mozart-Beethoven metaphor should be discarded. That shoe-horns Dijkstra's insistence on discipline and understanding into a corner where it really doesn't belong.

    Additional Remarks:

    The TV popularization is not so hot, and it confuses some things about musical composition and what a composer is doing and what a programmer is doing. In Dijkstra's own words, from his 1972 Turing Award Lecture: "We must not forget that it is not our business to make programs; it is our business to design classes of computations that will display a desired behavior." A composer may be out to discover the desired behavior.

    Also, in Dijkstra's notion that version 1.0 should be the final version, we too easily confuse how desired behavior and functionality evolve over time. I believe he oversimplifies in thinking that all future versions are because the first one was not thought out and done rigorously and reliably.

    Even without time-to-market urgency, I think we now understand much better that important kinds of software evolve along with the users' experience with them and the utilitarian purpose they have for them. Obvious counter-examples are games (also consider how theatrical motion pictures are developed). Do you think Beethoven could have written Symphony No. 9 without all of his preceding experience and exploration? Do you think the audience could have heard it for what it was? Should he have waited until he had the perfect sonata? I'm sure Dijkstra doesn't propose this, but I do think he goes too far with Mozart-Beethoven to make his point.

    In addition, consider chess-playing software. The new versions are not because the previous ones didn't play correctly. It is about exploiting advances in chess-playing heuristics and the available computer power. For this and many other situations, the idea that version 1.0 be the final version is off base. I understand that he is rightfully objecting to the release of known-to-be unreliable and maybe impaired software with deficiencies to be made up in maintenance and future releases. But the Mozartian counter-argument doesn't hold up for me.

    So, did Dijkstra continue to drive the first automobile he purchased, or clones of exactly that automobile? Maybe there is planned obsolescence, but a lot of it has to do with improvements and reliability that could not have possibly been available or even considered in previous generations of automotive technology.

    I am a big Dijkstra fan, but I think the Mozart-Beethoven thing is way too simplistic as well as inappropriate. I am a big Beethoven fan too.

    Simucal : I agree. I don't think Dijkstra was anti-incremental change. He was anti-releasing unfinished/poor software. I think his Mozart/Beethoven analogy is taken out of context. He didn't want me to hack/slash at problems and instead methodically understand and design stable software.
    MusiGenesis : Great line: "Beethoven did not beta-test his symphonies on the public".
  • I think the idea is to plan ahead. You need to at least have some kind of outline of what you are trying to do and how you plan to get there. If you just sit down at the keyboard and hope "the muse" will lead you to where your program needs to go, the results are liable to be rather uneven, and it will take you much longer to get there.

    This is true with any kind of writing. Very few authors just sit down at a typewriter with no ideas and start banging away until a bestselling novel is produced. Heck, my father-in-law (a high school English teacher) actually writes outlines for his letters.

JavaScript invalidated after DOM manipulation

My page has two components that depend on JavaScript. On the left-hand side there is an attribute-based navigation (abn), and the right-hand side shows the result of the abn.

There is an event handler for the abn and an event handler for the result (both are onclick). Whenever the user clicks on the abn, I do an AJAX call which returns a JSON object where the HTML result is the value of one of the key/value pairs. That HTML is inserted into the result component.

The event handler for the result works fine on a page refresh. It stops working when I insert the HTML content into the result slot after the AJAX call. I have verified that the result has all the divs and classes that my JavaScript depends on.

I think that when I replace the HTML content, the JavaScript handler just stops working. Can someone explain why this is happening and how I can solve it?

From stackoverflow
  • Did you insert an element with the same id (duplicate id)?

  • How are you replacing the HTML content of the result? My guess is that you have the event handler defined when the page loads, but you are overwriting the DOM element with a new DOM element which does not have the event handler. But I'd have to see some of the code or get more of an explanation before I know more :-)

    They are classes, not divs, and they have the same class. The event handler is not part of the result. The JavaScript is at the bottom of the page while the result is at the top, so the JavaScript is still there. They are not being replaced.

    Paul Whelan : Post some code until then its only us guessing
    roenving : Yeah, lets see some code ... Quizzes are fun, but not here !-)
  • When you update the HTML with new data do you re-attach any event handlers that you had previously?

    It's possible that you've replaced the element which previously had an event handler, and now does not.

  • I've seen similar behavior where manipulating the DOM with innerHTML zaps event handlers that had been previously set up, although only when you replace the actual elements that had the handlers attached. This is true for inline event handler attributes, as well as "proper" event handlers hooked up via JavaScript.

    The best way to test whether this is happening would be to add some debug statements to the function that's called when you click on the abn. If you're not using a debugger, just add some alerts (and then look into using a JavaScript debugger).

    function iAmCalledWhenSomeoneClicksOnAbn(){
        alert("I was called, w00t!");
        //...rest of function
    }
    

    The first click (that works) will give you an alert. If the second click (that doesn't work) skips the alert, you know that your DOM manipulations are removing your event handlers.

    If the second click still gives you the alert, there's something else going on. The most likely culprit is an unhandled JavaScript exception that's halting execution. Download a debugger for your browser (Firefox/Firebug, IE/Visual Studio Express Web Developer, Safari/Drosera) and follow the execution path until the exception is thrown, or until you get to the portion of your code where the DOM manipulation should happen. If you reach the latter, inspect the contents of all the variables, as well as the current contents of the DOM, to determine why the expected DOM manipulation isn't happening.

    They are classes, not divs, and they have the same class. The event handler is not part of the result. The JavaScript is at the bottom of the page while the result is at the top, so the JavaScript is still there. They are not being replaced.

    If you are replacing elements with new elements via .innerHTML, the JavaScript at the bottom of the page will not be re-executed when you do so. If you are adding event handlers using JavaScript (DOM Level 2) rather than as HTML attributes (Level 0), then those handlers are only added when the user first visits the page. You need to call that code every time you place new DOM elements on the page.

    Of course, this answer may be totally off the mark. We could tell if you gave us a code sample.
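
    To illustrate the re-attachment idea, here is a minimal sketch (attachResultHandlers, handleResultClick, the "result" id, and the response variable are invented for the example, not taken from the poster's code):

    function handleResultClick() {
        alert("result clicked: " + this.className);
    }

    // Attach the click handler to every div inside the result component.
    function attachResultHandlers() {
        var items = document.getElementById("result").getElementsByTagName("div");
        for (var i = 0; i < items.length; i++) {
            items[i].onclick = handleResultClick;
        }
    }

    // After every AJAX update that replaces the result markup:
    document.getElementById("result").innerHTML = response.html; // response = parsed JSON
    attachResultHandlers(); // the freshly inserted elements need handlers again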

MySQL - A difficult INSERT...SELECT on the same table [MySQL 5.051]

I am trying to insert a new row into my table which holds the same data as the one I am selecting from the same table, but with a different user_id, without a fixed value for auto_id (since that is an auto_increment field), and with ti set to NOW(). Below is my mockup query, where '1' is the new user_id. I have been trying many variations but am still stuck; can anyone help me turn this into a working query?

INSERT INTO `lins` ( `user_id` , `ad` , `ke` , `se` , `la` , `ra` , `ty` , `en` , `si` , `mo` , `ti` , `de` , `re` , `ti` ) (

SELECT '1', `ad` , `ke` , `se` , `la` , `ra` , `ty` , `en` , `si` , `mo` , `ti` , `de` , `re` , NOW( )
FROM `lins`
WHERE autoid = '4'
AND user_id = '2'
)

Thank you for taking the time to help me out!

From stackoverflow
  • I just tried it, and it works for me.

    Did you misspell the column name corresponding to "ke"?

    I'm guessing the actual column names in your table are not just two letters. E.g. you have two columns named "ti" in the query you show, so I assume you have edited these down from longer names.

  • Presumably you know that "WHERE autoid = 4" is sufficient if it's a unique autoid. And if you intended to single-quote the integers (1, 4 and 2), and they're numeric, you've created implied casts; in the WHERE clause, that will disable the ability of the optimizer to use indexes on the resulting integer values.

    Also, using unnecessary (and easily mistyped) back-ticks has been the root of at least one other similar question here.
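
    For reference, a sketch of the statement without the quoted integers and backticks, keeping only the unique autoid in the WHERE clause (the duplicated ti is assumed to be an artifact of shortening the real column names, so the placeholders ti1 and ti2 stand in for them):

    INSERT INTO lins (user_id, ad, ke, se, la, ra, ty, en, si, mo, ti1, de, re, ti2)
    SELECT 1, ad, ke, se, la, ra, ty, en, si, mo, ti1, de, re, NOW()
    FROM lins
    WHERE autoid = 4;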

What constitutes a good memory profile?

In designing desktop applications, are there any general rules on how much memory an application should use?

Heavyweight applications such as Firefox or Google Chrome can be easily understood, or at least profiled. But for smaller utilities or line-of-business applications, how much is an acceptable amount of memory usage?

I ask because I've recently come across a trade-off between memory usage and performance, and wonder whether there is any general consensus regarding it.

EDIT: The platform is Windows XP, for users whose machines are just capable of running rich internet applications.

My specific trade-off problem is about caching a lot of images in memory. If possible, I'd love to have my app cache as much as the user's memory will allow. I have done it so that the application will cache up to a certain maximum limit, taking current memory pressure into account.

But what would be a good number? How do you come up with one? That's the point I'm asking.

From stackoverflow
  • This depends on your target PC hardware. If your application uses too much memory then it will be slow while Windows pages. TEST! Try both options in your compromise, and some in between if it makes sense. Run the tests on a typical machine that your users would use, with a sensible number of other applications open - for most people that is Outlook and probably an instance or two of Internet Explorer (or the mail client/browser of your choice). I work in an organisation where users of my application are also likely to be running some other custom applications, so we test with those running as well.

    We found that our application used too much memory and made switching applications painfully slow, so we slowed our application slightly to reduce its memory usage. If you are interested, our target hardware was originally 512 MB machines, because that was our common standard-spec workstation; several PCs had to be upgraded to 1 GB because of this application. We have now trimmed its RAM usage a bit, but it is written in VB.NET and most of the memory used seems to be the framework: PerfMon says the process is using around 200 MB (peak) but that the managed heap is only around 2 MB!

  • This depends entirely on your target platform, which is more or less a business decision. The more memory you need, the fewer customers will be able to use your software. Some questions to ask: How much memory do your customers (or potential customers) have installed in their computers? What other applications will they run simultaneously with your application? Is your application assumed to be running exclusively (like a full-screen computer game), or is it a utility which is supposed to run mostly in the background, or to be switched into often from other applications?

    Here is one example of a survey showing the distribution of installed RAM in the systems of people playing games via Steam (source: Valve - Survey Summary Data):

    • Less than 96 MB: 0.01 %
    • 96 MB to 127 MB: 0.01 %
    • 128 MB to 255 MB: 0.21 %
    • 256 MB to 511 MB: 5.33 %
    • 512 MB to 999 MB: 19.81 %
    • 1 GB to 1.49 GB: 30.16 %
    • 1.5 GB to 1.99 GB: 6.10 %
    • 2.0 GB: 38.37 %

    A conclusion I would draw from a survey like this in my domain (computer games) is that I can reasonably expect almost all our users to have 512 MB or more, and the vast majority to have 1 GB or more. For a computer game which is supposed to run exclusively, this means a working set around 400 MB is rather safe and will shut almost no one out; if it provides significant added value for the product, it may make sense to have a working set around 800 MB.

    chakrit : It's a utility that will run mostly in background.
  • There is no absolute answer for this. It depends on too many variables.

    Here are some trade-offs for consideration:

    • What device/platform are you developing for?
    • Do you expect your user to use this software as the main purpose of their computer? (For example, maybe you are developing some kind of server software.)
    • Who is your target audience: home users? Power users?
    • Are you making realistic expectations for the amount of RAM a user will have?
    • Are you taking into consideration that the user will be using a lot of other software on that computer as well?

    Sometimes it's possible to have your cake and eat it too. For example if you were reading a file and writing it back out, you could read it chunk by chunk instead of reading the whole file into memory and then writing it out. In this case you have better memory usage, and no speed decrease.
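
    A sketch of that chunked approach in Java (the class name and the 64 KB buffer size are arbitrary choices for the example):

    import java.io.*;

    public class ChunkedCopy {
        public static void main(String[] args) throws IOException {
            byte[] buffer = new byte[64 * 1024]; // fixed working set, whatever the file size
            InputStream in = new FileInputStream(args[0]);
            OutputStream out = new FileOutputStream(args[1]);
            try {
                int n;
                while ((n = in.read(buffer)) != -1) {
                    out.write(buffer, 0, n); // write each chunk back out as it is read
                }
            } finally {
                in.close();
                out.close();
            }
        }
    }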

    I would generally recommend using more RAM to get better speed if you must, but only if the RAM requirements are realistic for your target audience. For example, if you expect a home user with 1 GB of RAM to use your program, then don't use 600 MB of RAM yourself.

    Consider using more RAM in this instance to get better speed, and to optimize some other part of your code to use less RAM.

    Edit:

    About your specific situation of caching images: I think it would be best for you to let the user set the amount of caching they would like to perform as an option. That way people with a lot of RAM can set it higher for better performance, and people with little RAM can set it low.
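
    As a sketch of that option, here is a size-bounded LRU cache built on Java's LinkedHashMap (the entry-count limit and the names are illustrative assumptions; a real image cache might bound total bytes instead):

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Keeps the most recently used images, evicting the oldest once
    // the user-configured limit is exceeded.
    class ImageCache extends LinkedHashMap<String, byte[]> {
        private final int maxEntries;

        ImageCache(int maxEntries) {
            super(16, 0.75f, true); // true = access order, so eviction is least-recently-used
            this.maxEntries = maxEntries;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
            return size() > maxEntries;
        }
    }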

Cause of a memory leak in C++ when using the Boehm GC

This code is causing a memory leak for me, and I'm not sure why.

[EDIT] Included code from here into question:

#include "src/base.cpp"

typedef std::map<std::string, AlObj*, std::less<std::string>, 
  gc_allocator<std::pair<const std::string, AlObj*> > > KWARG_TYPE;

AlInt::AlInt(int val)   {
    this->value = val;
    this->setup();
}

// attrs is of type KWARG_TYPE
void AlInt::setup() {
    this->attrs["__add__"] = new AddInts();
    this->attrs["__sub__"] = new SubtractInts();
    this->attrs["__mul__"] = new MultiplyInts();
    this->attrs["__div__"] = new DivideInts();
    this->attrs["__pow__"] = new PowerInts();
    this->attrs["__str__"] = new PrintInt();
}

int main() {
    while (true) {
        AlObj* a = new AlInt(3);
    }
}

AlInt inherits from AlObj, which in turn inherits from gc. When I comment out the contents of setup() I don't have a memory leak; this leads me to believe the issue is with the map not cleaning up. However, I'm using the gc allocator, so I'm not sure where to look next. Thoughts?

From stackoverflow
  • The 'gc allocator' is allocating and looking after objects of this type:

    std::pair<const std::string, AlObj*>
    

    Just because this object has a pointer in it does not mean the allocator will call delete on it.

    If you want the objects created in setup() to be garbage-collected, then you need to allocate them via the GC. Or learn to use boost::ptr_map or shared_ptr.

    A map destroys (not deletes) the objects it owns. In this case it owns the pointers, not what the pointers point at. So when the map is destroyed it deallocates everything associated with the map and the objects it owns (for pointers this means it does nothing).

    If you have a map (or other container) that contains pointers, you must manually delete the pointers; otherwise there will be a memory leak. Alternatively, you can use boost::ptr_map or a map that contains a shared_ptr.

    Alex Gaynor : So I need a destructor on AlObj that just does delete attrs? Also, here's *all* the code: http://github.com/alex/alex-s-language/tree/local%2Fcpp-generation/alexs_lang/cpp if someone wants to take a peek.
    Martin York : Or use a boost::ptr_map
    Leon Timmermans : I second Martin's suggestion
    Alex Gaynor : Isn't the point of using the Boehm GC allocator that it will delete what the pointer points to when the map is destroyed?
    Don Wakefield : Not quite. 'delete' will invoke the destructor of an object (which may release resources other than memory, file handles, for instance). GC will simply reclaim memory which is no longer pointed to by any live pointers in your app.
    Alex Gaynor : So essentially I should create a destructor that iterates over attrs, and deletes each item?
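
    A minimal sketch of such a destructor, assuming each AlObj is the sole owner of the handler objects in its attrs map (shared handlers, as raised below, would need a different ownership scheme):

    AlObj::~AlObj() {
        for (KWARG_TYPE::iterator it = attrs.begin(); it != attrs.end(); ++it) {
            delete it->second; // free the handler; the map frees its own nodes
        }
    }
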
  • Actually I'm not sure I follow, AlObj has a member attrs which is a map, not a pointer to a map, so how would I deallocate it?

  • Essentially what I'm saying is, the AlObj seems to be destructed, but not its members (since it doesn't leak unless I put stuff in attrs).

  • The allocator is deleting your pairs. But deleting a pair doesn't delete members of the pair that happen to be pointers.

    Jon Harrop : But surely it makes the members of the pair unreachable so they will be eligible for garbage collection?
  • OK, so it sounds like I need to delete the pointers myself. But what if a pointer points to something that's also being pointed at elsewhere? I don't want to actually free the memory in that case, which I think was the point of the GC.

Choosing when to instantiate classes

I recently wrote a class for an assignment in which I had to store names in an ArrayList (in Java). I initialized the ArrayList as an instance variable: private ArrayList<String> names. Later, when I checked my work against the solution, I noticed that they had initialized their ArrayList in the run() method instead.

I thought about this for a bit and I kind of feel it might be a matter of taste, but in general how does one choose in situations like this? Does one take up less memory or something?

PS I like the instance variables in Ruby that start with an @ symbol: they are lovelier.

(meta-question: What would be a better title for this question?)

From stackoverflow
  • In the words of the great Knuth, "premature optimization is the root of all evil".

    Just worry that your program functions correctly and that it does not have bugs. This is far more important than an obscure optimization that will be hard to debug later on.

    But to answer your question - if you initialize the field where it is declared, the memory is allocated when an instance of your class is constructed. If you initialize it inside a method, the allocation happens later, when that specific method is called.

    So it is only a question of initializing later... this is called lazy initialization in the industry.

    Vinko Vrsalovic : I'm not sure if it's just lazy initialization or using an instance variable instead of a local variable.
  • From wikibooks:

    There are three basic kinds of scope for variables in Java:

    • local variable, declared within a method in a class, valid for (and occupying storage only for) the time that method is executing. Every time the method is called, a new copy of the variable is used.

    • instance variable, declared within a class but outside any method. It is valid for and occupies storage for as long as the corresponding object is in memory; a program can instantiate multiple objects of the class, and each one gets its own copy of all instance variables. This is the basic data structure rule of Object-Oriented programming; classes are defined to hold data specific to a "class of objects" in a given system, and each instance holds its own data.

    • static variable, declared within a class as static, outside any method. There is only one copy of such a variable no matter how many objects are instantiated from that class.

    So yes, memory consumption is an issue, especially if the ArrayList inside run() is local.
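
    A trimmed-down sketch of the difference (the class and variable names are invented for the example):

    import java.util.ArrayList;

    class NameStore implements Runnable {
        // Instance variable: one list per object, alive as long as the object is.
        private ArrayList<String> names = new ArrayList<String>();

        public void run() {
            // Local variable: a fresh list on every call, collectable when run() returns.
            ArrayList<String> scratch = new ArrayList<String>();
            scratch.add("Ada");
            names.add("Grace");
        }
    }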

  • I am not sure I completely understand your problem.

    But as far as I understand it right now, the performance/memory benefit will be rather minor. Therefore I would definitely favour simplicity.

    So do what suits you the best. Only address performance/memory optimisation when needed.

  • Initialization

    As a rule of thumb, try to initialize variables when they are declared.

    If the value of a variable is intended never to change, make that explicit with use of the final keyword. This helps you reason about the correctness of your code, and while I'm not aware of compiler or JVM optimizations that recognize the final keyword, they would certainly be possible.

    Of course, there are exceptions to this rule. For example, a variable may be assigned in an if-else or a switch. In a case like that, a "blank" declaration (one with no initialization) is preferable to an initialization that is guaranteed to be overwritten before the dummy value is read.

    /* DON'T DO THIS! */
    Color color = null;
    switch(colorCode) {
      case RED: color = new Color("crimson"); break;
      case GREEN: color = new Color("lime"); break;
      case BLUE: color = new Color("azure"); break;
    }
    color.fill(widget);
    

    Now you have a NullPointerException if an unrecognized color code is presented. It would be better not to assign the meaningless null. The compiler would produce an error at the color.fill() call, because it would detect that you might not have initialized color.

    In order to answer your question in this case, I'd have to see the code in question. If the solution initialized it inside the run() method, it must have been used either as temporary storage, or as a way to "return" the results of the task.

    If the collection is used as temporary storage, and isn't accessible outside of the method, it should be declared as a local variable, not an instance variable, and most likely, should be initialized where it's declared in the method.

    Concurrency Issues

    For a beginning programming course, your instructor probably wasn't trying to confront you with the complexities of concurrent programming—although if that's the case, I'm not sure why you were using a Thread. But, with current trends in CPU design, anyone who is learning to program needs to have a firm grasp on concurrency. I'll try to delve a little deeper here.

    Returning results from a thread's run method is a bit tricky. This method comes from the Runnable interface, and there's nothing stopping multiple threads from executing the run method of a single instance. The resulting concurrency issues are part of the motivation behind the Callable interface introduced in Java 5. It's much like Runnable, but can return a result in a thread-safe manner, and throw an Exception if the task can't be executed.

    It's a bit of a digression, but if you are curious, consider the following example:

    class Oops extends Thread { /* Note that thread implements "Runnable" */
    
      private int counter = 0;
    
      private Collection<Integer> state = ...;
    
      public void run() {
        state.add(counter);
        counter++;
      }
    
      public static void main(String... argv) throws Exception {
        Oops oops = new Oops();
        oops.start();
        Thread t2 = new Thread(oops); /* Now pass the same Runnable to a new Thread. */
        t2.start(); /* Execute the "run" method of the same instance again. */
        ...
      }
    }
    

    By the end of the main method you pretty much have no idea what the "state" of the Collection is. Two threads are working on it concurrently, and we haven't specified whether the collection is safe for concurrent use. If we initialize it inside the thread, at least we can say that eventually state will contain one element, but we can't say whether that element is 0 or 1.

    Zarkonnen : Not sure if the poster is actually using a Thread. The method he's using just might happen to be called "run"?
    erickson : Good point, could be...
  • My personal rule of thumb for instance variables is to initialize them, at least with a default value, either:

    1. at declaration time, i.e.

      private ArrayList<String> myStrings = new ArrayList<String>();

    2. in the constructor

    If it's something that really is an instance variable, and represents state of the object, it is then completely initialized by the time the constructor exits. Otherwise, you open yourself to the possibility of trying to access the variable before it has a value. Of course, that doesn't apply to primitives where you will get a default value automatically.

    For static (class-level) variables, initialize them in the declaration or in a static initializer. I use a static initializer if I have do calculations or other work to get a value. Initialize in the declaration if you're just calling new Foo() or setting the variable to a known value.

  • You should avoid lazy initialization; it leads to problems later.
    But if you have to use it because the initialization is too heavy, do it like this:

    Static fields:

    // Lazy initialization holder class idiom for static fields
    private static class FieldHolder {
       static final FieldType field = computeFieldValue();
    }
    static FieldType getField() { return FieldHolder.field; }
    

    Instance fields:

    // Double-check idiom for lazy initialization of instance fields
    private volatile FieldType field;
    FieldType getField() {
       FieldType result = field;
       if (result == null) { // First check (no locking)
          synchronized(this) {
             result = field;
             if (result == null) // Second check (with locking)
                field = result = computeFieldValue();
          }
       }
       return result;
    }
    

    According to Joshua Bloch's book "Effective Java™, Second Edition" (ISBN-13: 978-0-321-35668-0):
    "Use lazy initialization judiciously"

    Zarkonnen : Actually, your double checked locking code isn't quite threadsafe. And your indentation makes it very non-obvious that the call to computeFieldValue() is inside the if (result == null).
    Zarkonnen : http://en.wikipedia.org/wiki/Double-checked_locking to back up my "not quite threadsafe" claim.
    damian : I fixed the indentation. About the claim: so is Joshua Bloch wrong? Wikipedia says: "As of J2SE 5.0, this problem has been fixed. The volatile keyword now ensures that multiple threads handle the singleton instance correctly"
    Zarkonnen : Re: damian: Actually, you're right, once you use volatile, it's fine. Apologies.

Signs of a great SQL developer

Based on their work, how do you distinguish a great SQL developer?

Examples might include:

Seldom uses CURSORs, and tries to refactor them away.
Seldom uses temporary tables, and tries to refactor them away.
Handles NULL values in OUTER JOINs with confidence.
Avoids SQL extensions that are not widely implemented.
Knows how to indent with elegance.

From stackoverflow
  • I don't think that cursors, temporary tables or other SQL practices are inherently bad or that their usage is a clear sign of how good a database programmer is.

    I think there is the right tool for every type of problem. Sure, if you only have a hammer, everything looks like a nail. I think a great SQL programmer or database developer is a person who knows which tool is the right one in a specific situation. IMHO you can't generalize excluding specific patterns.

    But a rule of thumb may be: a great database developer will find a shorter and more elegant solution to complex situations than the average programmer.

  • Here are a few things that don't apply to run-of-the-mill software developers, but do apply to someone with good SQL skills:

    • Defines beneficial indexes, but not redundant or unused indexes.
    • Employs transactions effectively.
    • Values referential integrity.
    • Applies normalization to database design.
    • Thinks in terms of sets, not in terms of loops.
    • Uses JOIN confidently.
    • Knows how NULL and three-valued logic work.
    • Understands the uses and benefits of query parameters.
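
    To make that last point concrete, here is a minimal JDBC sketch (Java purely for illustration; the customers table and the connection URL are invented):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class QueryParamDemo {
        public static void main(String[] args) throws SQLException {
            // Assumes a customers(id, name) table already exists.
            Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo");
            // The ? placeholder keeps the value out of the SQL text: the
            // driver handles quoting, so "O'Brien" cannot break the
            // statement, and the server can reuse the prepared plan.
            PreparedStatement ps = conn.prepareStatement(
                "SELECT id, name FROM customers WHERE name = ?");
            ps.setString(1, "O'Brien");
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getInt("id") + ": " + rs.getString("name"));
            }
            conn.close();
        }
    }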

    The examples you give, of not using cursors, temp tables, or knowing 3 alternative queries for a given task, I would not consider indications of being a great SQL developer. Perhaps I would call someone who does those things an "acrobat."

    RoadWarrior : These are marks of a good SQL developer, but I don't think they're sufficient to distinguish a "great" developer.
    le dorfier : OK, with the qualifier that there are conservative acrobats, and foolish acrobats.
    Bill Karwin : Yes, that was what I meant. An acrobat can exhibit impressive artistry, skill, and talent. But often there are easier ways to get from point A to point B.
  • I've found that a great SQL developer is usually also a great database designer, and will prefer to be involved in both the design and implementation of the database. That's because a bad database design can frustrate and hold back even the best developer - good SQL instincts don't always work right in the face of pathological designs, or systems where RI is poor or non-existent. So, one way to tell a great SQL developer is to test them on data modeling.

    Also, a great DB developer has to have complex join logic down cold, and know exactly what the results of various multi-way joins will be in different situations. Lack of comfort with joins is the #1 cause of bad SQL code (and bad SQL design, for that matter).

    As for specific syntax things, I'd hesitate at directives like:

    Does not use CURSORs.

    Does not use temporary tables.

    Use of those techniques might allow you to tell the difference between a dangerously amateur SQL programmer (who uses them when simple relational predicates would be far better) and a decent starting SQL programmer (who knows how to do most stuff without them). However, there are many situations in real world usage where temp tables and cursors are perfectly adequate ways (sometimes, the only ways) to accomplish things (short of moving to another layer to do the processing, which is sometimes better anyway).

    So, use of advanced concepts like these isn't forbidden, but unless you're clearly dealing with a SQL expert working on a really tough problem that, for some reason, doesn't lend itself to a relational solution ... yeah, they're probably warning signs.

    le dorfier : I especially like the comment with regard to mispartitioned functionality. I think that's more often than not what cursor use amounts to - the dbms is being used to solve some other layer's problem.
    Ian Varley : And, as with most things ... "it depends". Various architectural and fuzzy reasons might convince one to write code on the DB that would be more appropriate on another layer, and those are times when these advanced DB features are useful.
  • Just to add to the already great answers: a great developer can reduce a complex problem to something simple and easy to maintain.

  • Knows how to use INFORMATION_SCHEMA and table metadata to write either generic code or to generate code, saving repetitive database tasks.
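
    In the same spirit, a small sketch using JDBC's portable DatabaseMetaData to walk table metadata (the connection URL is invented; the printed lines could just as easily feed a code generator):

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class ColumnLister {
        public static void main(String[] args) throws SQLException {
            Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo");
            DatabaseMetaData meta = conn.getMetaData();
            // null and "%" act as wildcards: every table, every column.
            ResultSet rs = meta.getColumns(null, null, "%", "%");
            while (rs.next()) {
                System.out.println(rs.getString("TABLE_NAME") + "."
                    + rs.getString("COLUMN_NAME") + " : "
                    + rs.getString("TYPE_NAME"));
            }
            conn.close();
        }
    }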

Css margin-top vs margin-bottom

Suppose you have a sequence of block elements and you want to place a margin between them.

Which do you prefer, margin-top or margin-bottom or both? Why?

From stackoverflow
  • I always use margin-bottom, which means there is no unnecessary space before the first element.

  • This really depends on what you're designing it for and why.

    Something helpful you can do is set up generic styles for the default padding/margins you commonly use, and then just add the appropriate class to an element as needed.

    Like so:

    .bottom10 { margin-bottom: 10px; }
    .top10    { margin-top: 10px;    }
    
    <div class="myclass top10">...</div>
    

    CSS will let you apply multiple classes to an element, and this is very reusable.

    EDIT:

    Keep in mind, this is still better than inline styles, and it also allows you to give more flexibility to your CMS or templating system.

    Cheers!

    BlaM : If you start doing THAT (defining style in your markup), you might as well use the style attribute.
    thismat : It's an example, they would both reside in their separate folders. Using generic styles is never a bad thing when it saves you bloating your CSS and it also makes universal changes simpler when you're templating.
  • Depends on context. But, generally margin-top is better because you can use :first-child to remove it. Like so:

    div.block {
        margin-top: 10px;
    }
    
    div.block:first-child {
        margin-top: 0;
    }
    

    This way, the margins are only in between the blocks.

    Only works for more modern browsers, obviously.

    thismat : Keep in mind that pseudo-classes like this tend to break in older browsers.
    ken : does this work on IE<=6
    thismat : I've had trouble with pseudo classes like this before, and tend to shy away from them until more modern browsers become the "old".
    thismat : http://www.satzansatz.de/cssd/pseudocss.html - More research might help you, but from quickly googling I don't see anything too promising that doesn't involve a hack.
    sblundy : You're targeting IE 6 and earlier? That'll make things ugly no matter what you do.
    Andy Ford : @Ken - No, the :first-child pseudo-class does not work in IE6 and below (does anyone still target ie5? if so, sucks to be you). However it does work in ie7, Firefox, Safari, Opera, and Chrome. You can target first-child and last-child via jquery (& probably any other js lib, or with plain js)
  • @This Mat - I disagree with your approach. I would assign spacing on elements in a semantic fashion, and use contextual selectors to define behavior for that collection of elements.

    .content p { /* obviously choose a more elegant name */
       margin-bottom: 10px;
    }
    

    Naming classes after their behavior instead of their content is kind of a slippery slope, and muddies up the semantic nature of your HTML. For example, what if in one area of your page, all elements with class .top10 suddenly needed 20 pixels instead? Instead of changing a single rule, you would have to create a new class name, and change it on all the elements you wanted to affect.

    To answer the original question, it depends entirely on how you want elements to stack. Do you want extra space at the top or the bottom?

Getting the property name that a value came from

I would like to know how to get the name of the property that a method parameter value came from. The code snippet below shows what I want to do:

Person peep = new Person();
Dictionary<object, string> mapping = new Dictionary<object, string>();
mapping[peep.FirstName] = "Name";
Dictionary<string, string> propertyToStringMapping = Convert(mapping);
if (mapping[peep.FirstName] == propertyToStringMapping["FirstName"])
  Console.WriteLine("This is my desired result");

private Dictionary<string, string> Convert(Dictionary<object, string> mapping)
{
   Dictionary<string, string> stringMapping = new Dictionary<string, string>();
   foreach (KeyValuePair<object, string> kvp in mapping)
   {
     //propertyName should equal "FirstName"
     string propertyName = kvp.Key??????
     stringMapping[propertyName] = kvp.Value;
   }
   return stringMapping;
}
From stackoverflow
  • You are not able to do it this way: C# evaluates the FirstName property by calling its get accessor and passes the resulting value to the dictionary's indexer, so the information about where that value came from is completely lost. It's just like evaluating 2 + 2: if you write x = 2 + 2, x will have the value 4, but there will be no way to tell whether it was 3 + 1, 2 + 2, or 5 + (-1) that evaluated to 4.

  • I think ultimately you will need to store either the PropertyInfo object associated with the property, or the string representation of the property name, in your mapping object. The syntax you have:

    mapping[peep.FirstName] = "Name";
    

    Would create an entry in the dictionary with a key value equal to the value of the peep.FirstName property, and the Value equal to "Name".

    If you store the property name as a string like:

    mapping["FirstName"] = "Name";
    

    You could then use reflection to get the "FirstName" property of your object. You would have to pass the "peep" object into the Convert function, however. This seems to be somewhat opposite of what you are wanting to do.

    You may also be able to get crazy with Expressions and do something like:

    var mapping = new Dictionary<Expression<Func<Person, object>>, string>();
    mapping[ p => p.FirstName ] = "Name";
    

    Then in your Convert function you could examine the expression. It would look something like:

    private Dictionary<string,string> Convert<T>(Dictionary<Expression<Func<T, object>>,string> mapping)
    {
        var result = new Dictionary<string,string>();
        foreach(var item in mapping)
        {
            Expression body = item.Key.Body;
            // A value-type property gets boxed to object, which wraps the
            // member access in a Convert node; unwrap it first.
            if (body is UnaryExpression)
                body = ((UnaryExpression)body).Operand;
            string propertyName = ((MemberExpression)body).Member.Name;
            string propertyValue = item.Value;
            result.Add(propertyName, propertyValue);
        }
        return result;
    }
    

    This is more or less off the top of my head, so I may have the expression types off a bit. If there are issues with this implementation let me know and I will see if I can work out a functional example.

  • I don't know much about C#, but I suppose peep is an enum? As for Java, you could do:

    String propertyName = kvp.getKey().toString();
    

    Maybe there's something similar in C#?

    And even if peep isn't an enum, I see no reason why the key should be an arbitrary object. So maybe the solution is exactly to use an enum as the type of the key, as sketched below?

    Also, I don't know what you're trying to do, but usually I would not recommend converting the enum key to a string. What can you do with a string that you can't also do with an enum?
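
    A sketch of that enum-keyed idea in Java (all names are invented):

    import java.util.EnumMap;
    import java.util.Map;

    public class EnumKeyDemo {
        enum PersonProperty { FIRST_NAME, LAST_NAME }

        public static void main(String[] args) {
            // The key is the property itself, so its name is never lost
            // the way an evaluated property value is.
            Map<PersonProperty, String> mapping =
                new EnumMap<PersonProperty, String>(PersonProperty.class);
            mapping.put(PersonProperty.FIRST_NAME, "Name");

            for (Map.Entry<PersonProperty, String> kvp : mapping.entrySet()) {
                String propertyName = kvp.getKey().toString(); // "FIRST_NAME"
                System.out.println(propertyName + " -> " + kvp.getValue());
            }
        }
    }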