Tuesday, February 8, 2011

Are there any decent UI components for touch screen web applications?

For various reasons a web application would best suit a project that I am working on, but I am worried about the user interface. There will be a lot of pick-and-choose options that could be handled by check lists, combo boxes, etc., and to a lesser extent there will be some free text fields. However, I am concerned about the usability of standard components because users will have to access the application from touchscreen computers that will be wall mounted in a manufacturing environment (i.e. they will be very dirty and poorly maintained).

  • As a touch is translated to a click, you can probably use mostly standard components, maybe supplemented by JavaScript. For example, it should be easy to implement an onClick handler for every label that toggles the associated checkbox (or simply use <label for="..."> to get that behavior natively).

    I'd worry more about the text input. Touchscreen typing (especially when wall-mounted) sounds tedious.

    From MattW.
  • We are currently in the process of rolling out an application that is exactly as you describe. There are a number of issues that you will run into.

    You will probably need a "soft keyboard" at some point. We have not found a decent third-party one, but they are not too difficult to write yourself.

    If you want to implement any kind of keypress button that writes text into another control, you need to be able to call the SetStyle() method to ensure that focus does not change. We found that the Janus button controls did not allow us to make this change, so we reverted to the standard WinForms button.

    I have not seen any existing component libraries that are designed specifically for touch screens. We have used a combination of the standard WinForms controls and the Janus UI components.

    If I were starting again now though, I would start with WPF. It is such a huge improvement over WinForms that it would be an easy choice for me.

    If you are really stuck with doing it in a web browser, then I would consider Silverlight as a viable option. Personally, I would never touch HTML for an application where quick data entry is important.

    Don't forget about bar-code input, sooner or later someone is going to tell you they want to do input with a scanner.

  • I'm currently working on a touch screen web application myself and keep wondering when I would "have" to put in a soft keyboard. The modules currently being developed deal only with order entry and retrieval/dispatch functionality, and the client wants to limit any input by the call center attendant wherever possible. So no data input yet.

    However, I've been searching for a keyboard for a touch screen myself. Darryl, where would you suggest I should begin if I had to write one?

    Good luck to both of you!

  • You might want to take a look at Baobab Health's open source touchscreen toolkit. It does a nice job converting an HTML form into a touchscreen interaction using only JavaScript. Documentation is a little light, but it might at least be a good starting point.

    From Cory
  • You can use an iPhone-like effect component for Windows:

    http://Win2iEen.add-soft.jp/

    It will be helpful for you.

    From Mats

Is a variable named i unacceptable?

As far as variable naming conventions go, should iterators be named i or something more semantic like count? If you don't use i, why not? If you feel that i is acceptable, are there cases of iteration where it shouldn't be used?

  • It helps if you name it something that describes what it is looping through. But I usually just use i.

    From mrinject
  • i is definitely acceptable. Not sure what kind of justification I need to make -- but I do use it all of the time, and other very respected programmers do as well.

    Social validation, I guess :)

    From Ian P
  • Yes, in fact it's preferred since any programmer reading your code will understand that it's simply an iterator.

  • Take a look at http://stackoverflow.com/questions/101070/what-is-an-ideal-variable-naming-convention-for-loop-variables, which addresses this question.

    VirtuosiMedia : Nothing came up in the similar questions section when I was typing this, so I didn't see that question. Thanks for that, though.
    Jon Ericson : @VirtuosiMedia: That's ok. Now we have another keyword for people to search on.
  • As long as you are either using i to count loops, or as part of an index that goes from 0 (or 1, depending on the language) to n, then I would say i is fine.

    Otherwise it's probably easy to give i a more meaningful name if it's more than just an index.

  • If the "something more semantic" is "iterator" then there is no reason not to use i; it is a well understood idiom.

  • "i" means "loop counter" to a programmer. There's nothing wrong with it.

  • I tend to use i, j, k for very localized loops (only exist for a short period in terms of number of source lines). For variables that exist over a larger source area, I tend to use more detailed names so I can see what they're for without searching back in the code.

    By the way, I think that the naming convention for these came from the early Fortran language where I was the first integer variable (A - H were floats)?

    VirtuosiMedia : That's an interesting little tidbit about the history. Thanks.
    Greg Rogers : as they say "God is real, unless declared integer"
    From paxdiablo
  • Here's another example of something that's perfectly okay:

    foreach (Product p in ProductList)
    {
        // Do something with p
    }
    
    Steve Jessop : Absolutely. Nobody ever complains that mathematical proofs which read, "consider a continuous function, f" are incomprehensible because the function is called "f" rather than "a_continuous_function".
    From Ian P
  • I should point out that i and j are also mathematical notation for matrix indices. And usually, you're looping over an array. So it makes sense.

  • Depends on the context, I suppose. If you were looping through a set of objects in some collection, then it should be fairly obvious from the context what you are doing.

    for(int i = 0; i < 10; i++)
    {
        // i is well known here to be the index
        objectCollection[i].SomeProperty = someValue;
    }
    

    However if it is not immediately clear from the context what it is you are doing, or if you are making modifications to the index you should use a variable name that is more indicative of the usage.

    for(int currentRow = 0; currentRow < numRows; currentRow++)
    {
        for(int currentCol = 0; currentCol < numCols; currentCol++)
        {
            someTable[currentRow][currentCol] = someValue;
        }
    }
    
    Martin Beckett : I would use iRow and iCol, but then I started on FORTRAN
    Josh : FORTRAN is still awesome for scientific computing. Lol, scientists will actually laugh at you if you mention other languages.
    sixlettervariables : Double any single character variable to make searching useful. ii v. i, xx v. x.
    Outlaw Programmer : The 'double any single character' trick is less useful when you have an IDE with built in refactoring wizards. As for Josh's answer, this is basically a perfect example of what to do. Respeck!
    Orion Edwards : for a nested loop over a 2 dimensional array, the convention is to use i, then j. currentRow and currentCol are way too wordy :-) That said, I agree with the point, which is that if things are not clear, don't use i
    mat_geek : currentRow and currentCol could easily be called just row and col.
    Camilo Díaz : Orion Edwards is right. For the 2nd case, you'd better stick to using i, j.
    Dan Walker : In the first instance however, you would be better off using foreach (assuming the language supports it)
    Thomas Owens : For a two-d array that represents a grid, I often use x and y, which correspond to the concept of x and y axis.
    demoncodemonkey : agree with mat_geek about row & col
    From Josh
  • As long as you're using it temporarily inside a simple loop and it's obvious what you're doing, sure. That said, is there no other short word you can use instead?

    i is widely known as a loop iterator, so you're actually more likely to confuse maintenance programmers if you use it outside of a loop, but if you use something more descriptive (like filecounter), it makes code nicer.

    From Dan Udey
  • i is acceptable, for certain. However, I learned a tremendous amount one semester from a C++ teacher I had who refused code that did not have a descriptive name for every single variable. The simple act of naming everything descriptively forced me to think harder about my code, and I wrote better programs after that course, not from learning C++, but from learning to name everything. Code Complete has some good words on this same topic.

  • What is the value of using i instead of a more specific variable name? To save 1 second or 10 seconds or maybe, maybe, even 30 seconds of thinking and typing?

    What is the cost of using i? Maybe nothing. Maybe the code is so simple that using i is fine. But maybe, maybe, using i will force developers who come to this code in the future to have to think for a moment "what does i mean here?" They will have to think: "is it an index, a count, an offset, a flag?" They will have to think: "is this change safe, is it correct, will I be off by 1?"

    Using i saves time and intellectual effort when writing code but may end up costing more intellectual effort in the future, or perhaps even result in the inadvertent introduction of defects due to misunderstanding the code.

    Generally speaking, most software development is maintenance and extension, so the amount of time spent reading your code will vastly exceed the amount of time spent writing it.

    It's very easy to develop the habit of using meaningful names everywhere, and once you have that habit it takes only a few seconds more to write code with meaningful names, but then you have code which is easier to read, easier to understand, and more obviously correct.

    From Wedge
  • I use i for short loops.

    The reason it's OK is that I find it utterly implausible that someone could see a declaration of iterator type, with initializer, and then three lines later claim that it's not clear what the variable represents. They're just pretending, because they've decided that "meaningful variable names" must mean "long variable names".

    The reason I actually do it, is that I find that using something unrelated to the specific task at hand, and that I would only ever use in a small scope, saves me worrying that I might use a name that's misleading, or ambiguous, or will some day be useful for something else in the larger scope. The reason it's "i" rather than "q" or "count" is just convention borrowed from mathematics.

    I don't use i if:

    • The loop body is not small, or
    • the iterator does anything other than advance (or retreat) from the start of a range to the finish of the loop:

    i doesn't necessarily have to go in increments of 1 so long as the increment is consistent and clear, and of course might stop before the end of the iterand, but if it ever changes direction, or is unmodified by an iteration of the loop (including the devilish use of iterator.insertAfter() in a forward loop), I try to remember to use something different. This signals "this is not just a trivial loop variable, hence this may not be a trivial loop".

  • It depends. If you're iterating over some particular set of data then I think it makes more sense to use a descriptive name. (eg. filecounter as Dan suggested).

    However, if you're performing an arbitrary loop then i is acceptable. As one workmate described it to me: i is a convention that means "this variable is only ever modified by the for loop construct". If that's not true, don't use i.

  • The use of i, j, k for INTEGER loop counters goes back to the early days of FORTRAN.
    Personally I don't have a problem with them so long as they are INTEGER counts.
    But then I grew up on FORTRAN!

    From DaveF
  • i is fine, but something like this is not:

    for (int i = 0; i < 10; i++)
    {
        for (int j = 0; j < 10; j++)
        {
            string s = datarow[i][j].ToString(); // or worse
        }
    }
    

    It's very common for programmers to inadvertently swap the i and the j in the code, especially if they have bad eyesight or their Windows theme is "hotdog". This is always a "code smell" for me - it's kind of rare that this doesn't get screwed up.
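
    The swap is easy to demonstrate. Here is a quick language-agnostic sketch (Python, with made-up data): on a non-square table, transposed indices fail immediately; on a square one they would silently read the wrong cells.

```python
# A 2x3 table: 2 rows, 3 columns (made-up data for illustration).
table = [[1, 2, 3],
         [4, 5, 6]]

def flatten_correct(t):
    # Row-major walk: outer index i over rows, inner index j over columns.
    return [t[i][j] for i in range(len(t)) for j in range(len(t[0]))]

def flatten_swapped(t):
    # The classic slip: t[j][i] instead of t[i][j].
    return [t[j][i] for i in range(len(t)) for j in range(len(t[0]))]

print(flatten_correct(table))   # [1, 2, 3, 4, 5, 6]

try:
    flatten_swapped(table)
except IndexError:
    # On a non-square table the bug at least crashes; on a square
    # table it would silently produce the transpose instead.
    print("swapped indices raised IndexError")
```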

  • i is so common that it is acceptable, even for people that love descriptive variable names.

    What is absolutely unacceptable (and a sin in my book) is using i, j, or k in any other context than as an integer index in a loop... e.g.

    foreach(Input i in inputs)
    {
        Process(i);
    }
    
  • I think i is completely acceptable in for-loop situations. I have always found this to be pretty standard and have never really run into interpretation issues when i is used in this instance. foreach loops get a little trickier, and I think it really depends on your situation. I rarely if ever use i in a foreach, only in for loops, as I find i to be too undescriptive in these cases. For a foreach I try to use an abbreviation of the object type being looped. e.g.:

    foreach(DataRow dr in datatable.Rows)
    {
        //do stuff to/with datarow dr here
    }
    

    Anyways, just my $0.02.

  • My feeling is that the concept of using a single letter is fine for "simple" loops; however, I learned to use double letters a long time ago and it has worked out great.

    I asked a similar question last week and the following is part of my own answer:

    // recommended style
    for (ii=0; ii<10; ++ii) {
        for (jj=0; jj<10; ++jj) {
            mm[ii][jj] = ii * jj;
        }
    }

    // "typical" single-letter style
    for (i=0; i<10; ++i) {
        for (j=0; j<10; ++j) {
            m[i][j] = i * j;
        }
    }

    In case the benefit isn't immediately obvious: searching through code for any single letter will find many things that aren't what you're looking for. The letter i occurs quite often in code where it isn't the variable you're looking for.

    I've been doing it this way for at least 10 years.

    Note that plenty of people have commented that either/both of the above are "ugly"...

    Jon Ericson : And for people with editors that can search on word breaks, completely pointless.
    Andrei Rinea : Offtopic a little : why do you preincrement the loop variable in the for statement? I usually see **POSTINCREMENT** instead of preincrement.
    just mike : Efficiency. It comes from my early experience with the C language, when the optimizers weren't perfect. If you post-increment, the expression is first evaluated, then incremented, then evaluated again; if you pre-increment, there's no initial evaluation, just increment then evaluate.
    From just mike

How to receive UDP Multicast in VxWorks 5.5

I have been unable to receive UDP multicast under VxWorks 5.5. I've joined the multicast group:

setsockopt(soc, IPPROTO_IP, IP_ADD_MEMBERSHIP, (char *) &ipMreq, sizeof (ipMreq));

Similar code on an adjacent Windows machine does receive multicast. I am able to send multicast from VxWorks; ifShow() indicates the interface is multicast capable; MCAST_ROUTING is enabled in the kernel config, but still unable to receive multicast.

Edit: I needed to set a bit in the RealTek Ethernet driver's RX configuration register to enable multicast frames to be passed up to the application layer.

#define RTL_RXCG_AM           0x04      /* Accept Multicast */
  • Are you checking the return value on the Join setsockopt() call to be sure it's actually succeeding? I had a specific problem with VxWorks 5.5 in the past where my multicast joins were failing when they shouldn't be. I believe we had to get new libraries from WindRiver to fix the issue.

    Edit: There is no specific trick that I'm aware of to getting multicast to work with VxWorks. It should use the standard BSD sockets operations. If the interface can receive unicast traffic properly, and a network analyzer (Wireshark, for instance) shows that the multicast JOINs are being sent and the inbound multicast packets are correctly formed, I would suspect a driver issue. WindRiver support has been very helpful for us in the past with these sorts of problems; I don't know if you have a support contract with them to get that level of assistance.
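
    To make the "check the join" advice concrete, here is a sketch of the same BSD-sockets membership call in Python (group and port are made up; on VxWorks the C setsockopt() return code must be tested explicitly, whereas here a failure raises):

```python
import socket
import struct

GROUP = "239.1.2.3"   # hypothetical multicast group
PORT = 5007           # hypothetical port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# struct ip_mreq: 4-byte group address + 4-byte local interface (INADDR_ANY)
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))

try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    print("multicast join succeeded")
except OSError as exc:
    # This is the failure mode worth logging rather than ignoring.
    print("multicast join failed:", exc)
finally:
    sock.close()
```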

    jholl : Yes, setsockopt() returns NO_ERROR.
    From mbyrne215

What strategies and tools are useful for finding memory leaks in .net?

I wrote C++ for 10 years. I encountered memory problems, but they could be fixed with a reasonable amount of effort.

For the last couple of years I've been writing C#. I find I still get lots of memory problems. They're difficult to diagnose and fix due to the non-determinism, and because the C# philosophy is that you shouldn't have to worry about such things when you very definitely do.

One particular problem I find is that I have to explicitly dispose and cleanup everything in code. If I don't, then the memory profilers don't really help because there is so much chaff floating about you can't find a leak within all the data they're trying to show you. I wonder if I've got the wrong idea, or if the tool I've got isn't the best.

What kind of strategies and tools are useful for tackling memory leaks in .net?

  • I use SciTech's Mem Profiler when I suspect a memory leak.

    So far, I have found it to be very reliable and powerful. It has saved my bacon on at least one occasion.

    The GC works very well in .NET IMO, but just like any other language or platform, if you write bad code, bad things happen.

    Scott Langham : Yep, I had a go with this one, and it helped me get to the bottom of some tricky leaks. The largest leaks I had turned out to be caused by third party libraries in unmanaged code they accessed via interop. I was impressed that this tool detected leaks in unmanaged code as well as managed code.
    Scott Langham : I've accepted this as the answer because its what worked for me in the end, but I think all of the other answers are very useful. By the way, this tool is more commonly called SciTech's Mem Profiler!
    Geoffrey Chetwood : @Scott: Good point about Scitech, edited now.
  • You still need to worry about memory when you are writing managed code unless your application is trivial. I will suggest two things: first, read CLR via C# because it will help you understand memory management in .NET. Second, learn to use a tool like CLRProfiler (Microsoft). This can give you an idea of what is causing your memory leak (e.g. you can take a look at your large object heap fragmentation)

    Scott Langham : Yep. CLRProfiler is pretty cool. It can get a bit explosive with information when trying to dig through the view it gives you of allocated objects, but everything is there. It's definitely a good starting point, especially as it's free.
    From Zac
  • We've used Ants Profiler Pro by Red Gate software in our project. It works really well for all .NET language-based applications.

    We found that the .NET Garbage Collector is very "safe" in its cleaning up of in-memory objects (as it should be). It would keep objects around just because we might be using them sometime in the future. This meant we needed to be more careful about the number of objects that we inflated in memory. In the end, we converted all of our data objects over to an "inflate on-demand" model (just before a field is requested) in order to reduce memory overhead and increase performance.

    EDIT: Here's a further explanation of what I mean by "inflate on demand." In our object model of our database we use Properties of a parent object to expose the child object(s). For example if we had some record that referenced some other "detail" or "lookup" record on a one-to-one basis we would structure it like this:

    Class ParentObject
       Private mRelatedObject As New CRelatedObject
       Public ReadOnly Property RelatedObject() As CRelatedObject
          Get
             mRelatedObject.getWithID(RelatedObjectID)
             Return mRelatedObject
          End Get
       End Property
    End Class
    

    We found that the above system created some real memory and performance problems when there were a lot of records in memory. So we switched over to a system where objects were inflated only when they were requested, and database calls were done only when necessary:

    Class ParentObject
       Private mRelatedObject As CRelatedObject
       Public ReadOnly Property RelatedObject() As CRelatedObject
          Get
             If mRelatedObject Is Nothing Then
                mRelatedObject = New CRelatedObject
             End If
             If mRelatedObject.isEmptyObject Then
                mRelatedObject.getWithID(RelatedObjectID)
             End If
             Return mRelatedObject
          End Get
       End Property
    End Class
    

    This turned out to be much more efficient because objects were kept out of memory until they were needed (the Get method was accessed). It provided a very large performance boost in limiting database hits and a huge gain on memory space.

    Gord : I second this product. It was one of the best profilers that I have used.
    Alexandre Brisebois : can you give us some resources on Inflate on-demande ?
    Scott Langham : I found the profiler to be quite good for looking at performance issues. However, the memory analysis tools were pretty poor. I found a leak with this tool, but it was rubbish at helping me identify the cause of the leak. And it doesn't help you at all if the leak happens to be in unmanaged code.
    Scott Langham : Ok, the new version 5.1, is a heck of a lot better. It's better at helping you find the cause of the leak (although - there are still a couple of problems with it that ANTS have told me they'll fix in the next version). Still doesn't do unmanaged code though, but if you're not bothered about unmanaged code, this is now a pretty good tool.
    From Mark
  • The best thing to keep in mind is to keep track of the references to your objects. It is very easy to end up with hanging references to objects that you don't care about anymore. If you are not going to use something anymore, get rid of it.

    Get used to using a cache provider with sliding expirations, so that if something isn't referenced for a desired time window it is dereferenced and cleaned up. But if it is being accessed a lot it will stay in memory.
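
    As a rough sketch of the sliding-expiration idea (Python used as a neutral illustration; class and method names are invented):

```python
import time

class SlidingCache:
    """Toy cache: an entry expires if not read within `ttl` seconds,
    and every successful read slides its expiration window forward."""

    def __init__(self, ttl):
        self.ttl = ttl
        self._store = {}  # key -> (value, last_access)

    def put(self, key, value):
        self._store[key] = (value, time.monotonic())

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, last_access = entry
        if time.monotonic() - last_access > self.ttl:
            del self._store[key]  # expired: drop the reference so it can be collected
            return default
        self._store[key] = (value, time.monotonic())  # slide the window
        return value

cache = SlidingCache(ttl=0.05)
cache.put("report", "big object")
print(cache.get("report"))  # "big object" - read within the window
time.sleep(0.1)
print(cache.get("report"))  # None - expired and dereferenced
```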

    From Gord
  • One of the best approaches is to use the Debugging Tools for Windows: take a memory dump of the process with ADPlus, then use WinDbg and the SOS extension to analyze the process memory, threads, and call stacks.

    You can use this method for identifying problems on servers too, after installing the tools, share the directory, then connect to the share from the server using (net use) and either take a crash or hang dump of the process.

    Then analyze offline.

    Scott Langham : Yes, this works well, especially for more advanced stuff or diagnosing problems in released software that you can't easily attach a debugger to. This blog has lots of tips on using these tools well: http://blogs.msdn.com/tess/
  • If the leaks you are observing are due to a runaway cache implementation, this is a scenario where you might want to consider the use of WeakReference. This could help to ensure that memory is released when necessary.

    However, IMHO it would be better to consider a bespoke solution - only you really know how long you need to keep the objects around, so designing appropriate housekeeping code for your situation is usually the best approach.
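
    The same idea exists outside .NET; a minimal Python sketch with weakref (names invented) shows the behavior a WeakReference-based cache gives you:

```python
import gc
import weakref

class Expensive:
    """Stand-in for a large cached object."""
    def __init__(self, name):
        self.name = name

# Holds only weak references: entries disappear once nothing else
# keeps the object alive, so the cache itself cannot cause a leak.
cache = weakref.WeakValueDictionary()

obj = Expensive("report")
cache["report"] = obj
print("report" in cache)  # True - a strong reference still exists

del obj          # drop the last strong reference
gc.collect()     # immediate on CPython anyway; this makes the point portable
print("report" in cache)  # False - the entry was collected
```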

  • Just for the forgetting-to-dispose problem, try the solution described in this blog post. Here's the essence:

        public void Dispose ()
        {
            // Dispose logic here ...
    
            // It's a bad error if someone forgets to call Dispose,
            // so in Debug builds, we put a finalizer in to detect
            // the error. If Dispose is called, we suppress the
            // finalizer.
    #if DEBUG
            GC.SuppressFinalize(this);
    #endif
        }
    
    #if DEBUG
        ~TimedLock()
        {
            // If this finalizer runs, someone somewhere failed to
            // call Dispose, which means we've failed to leave
            // a monitor!
            System.Diagnostics.Debug.Fail("Undisposed lock");
        }
    #endif
    
    Scott Langham : I like it. I've been using this and it works well for me.
    From Jay Bazuzi
  • I have found a few really good articles that have been useful to me when looking at memory issues in .NET, and I have kept references to them so I have them around.

    Debugging Memory Problems (MSDN)

    Debugging Tools for Windows

    SOS Debugging Extensions

    These have all been very useful. I come from a C++ background too so I know what you mean. In the end there is a lot of overlap in the tools that you use to look at these problems. Hope this helps.

  • Are you using unmanaged code? If you are not using unmanaged code, according to Microsoft, memory leaks in the traditional sense are not possible.

    Memory used by an application may not be released however, so an application's memory allocation may grow throughout the life of the application.

    From How to identify memory leaks in the common language runtime at Microsoft.com

    A memory leak can occur in a .NET Framework application when you use unmanaged code as part of the application. This unmanaged code can leak memory, and the .NET Framework runtime cannot address that problem.

    Additionally, a project may only appear to have a memory leak. This condition can occur if many large objects (such as DataTable objects) are declared and then added to a collection (such as a DataSet). The resources that these objects own may never be released, and the resources are left alive for the whole run of the program. This appears to be a leak, but actually it is just a symptom of the way that memory is being allocated in the program.

    For dealing with this type of issue, you can implement IDisposable. If you want to see some of the strategies for dealing with memory management, I would suggest searching for IDisposable, XNA, memory management as game developers need to have more predictable garbage collection and so must force the GC to do its thing.

    One common mistake is to not remove event handlers that subscribe to an object. An event handler subscription will prevent an object from being recycled. Also, take a look at the using statement which allows you to create a limited scope for a resource's lifetime.
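
    The handler-subscription leak is not specific to .NET events; a small Python sketch (invented Publisher/Subscriber names) makes the lifetime effect visible:

```python
import gc
import weakref

class Publisher:
    def __init__(self):
        self._handlers = []

    def subscribe(self, handler):
        self._handlers.append(handler)   # strong reference to the bound method

    def unsubscribe(self, handler):
        self._handlers.remove(handler)

class Subscriber:
    def on_event(self, data):
        pass

pub = Publisher()
sub = Subscriber()
pub.subscribe(sub.on_event)  # the bound method keeps `sub` reachable

probe = weakref.ref(sub)
del sub
gc.collect()
print(probe() is not None)   # True: the forgotten subscription pins the subscriber

pub.unsubscribe(probe().on_event)  # an equal bound method, so remove() finds it
gc.collect()
print(probe() is None)       # True: unsubscribed, now collectable
```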

    Constantin : See http://blogs.msdn.com/tess/archive/2006/01/23/net-memory-leak-case-study-the-event-handlers-that-made-the-memory-baloon.aspx. It doesn't really matter whether memory leak is "traditional" or not, it's still a leak.
    Timothy Lee Russell : I see your point -- but inefficient allocation and reuse of memory by a program is different than a memory leak.
    frameworkninja : good answer, thank you for reminding me that event handlers can be dangerous.
  • You may want to check out dotTrace by JetBrains (makers of Resharper). Fantastic tool!

    http://www.jetbrains.com/profiler/

    frameworkninja : I agree totally.
  • Big guns - Debugging Tools for Windows

    This is an amazing collection of tools. You can analyze both managed and unmanaged heaps with it, and you can do it offline. This was very handy for debugging one of our ASP.NET applications that kept recycling due to memory overuse. I only had to create a full memory dump of the live process running on the production server; all analysis was done offline in WinDbg. (It turned out some developer was overusing in-memory Session storage.)

    "If broken it is..." blog has very useful articles on the subject.

    From Constantin
  • This blog has some really wonderful walkthroughs using windbg and other tools to track down memory leaks of all types. Excellent reading to develop your skills.

    From twk

Hiding the header on an Infragistics Winform UltraCombo

I've gone through just about every property I can think of, but haven't found a simple way to hide the header on a winform UltraCombo control from Infragistics.

Headers make sense when I have multiple visible columns and whatnot, but sometimes it would be nice to hide it.

To give a simple example, let's say I have a combobox that displays whether something is active or not. There's a label next to it that says "Active". The combobox has one visible column with two rows -- "Yes" and "No".

When the user opens the drop down, they see "Active" or whatever the header caption for the column is and then the choices. I'd like it to just show "Yes" and "No" only.

It's a minor aesthetic issue that probably just bothers me and isn't even noticed by the users, but I'd still really like to know if there's a way around this default behavior.

RESOLUTION: As @Craig suggested, ColHeadersVisible is what I needed. The location of the property was slightly different, but it was easy enough to track down. Once I set DisplayLayout.Bands(0).ColHeadersVisible = False, the dropdown displayed the way I wanted it to.

  • <DropDownLayout ColHeadersVisible="No"></DropDownLayout> works for us. This is on Infragistics NetAdvantage for .NET 2008.

    Kevin Fairchild : Is this for the Winforms control or Web?
    Craig : This would be the web controls.
    From Craig
  • My understanding of the Infragistics WinForms suite is that the UltraCombo is designed for multi-column (or embedded UltraGrid) use.

    What I did to get around this was to replace those UltraCombos with UltraComboEditor controls. These are IG's "enhanced" versions of the standard .NET combobox.

    That may or may not be appropriate in your case, depending on your usage scenario. However, it looks like you have a resolution using the original UltraCombo, which will definitely be lower-impact on your existing code.

    (And thanks to you and Craig both: I actually overlooked that property when I went through this pain the first time; I'm making a mental note of where it is for the future!)

    From John Rudy

Benefits of multiple memcached instances

Is there any difference between having four 0.5 GB memcached servers running or one 2 GB instance?

Does running multiple instances offer any benefits?

  • High availability is nice, and memcached will automatically distribute your cache across the 4 servers. If one of those servers dies for some reason, you can handle that error by either just continuing as if the cache was blank, redirecting to a different server, or any sort of custom error handling you want. If your single 2 GB server dies, then your options are pretty limited.

    The important thing to remember is that you do not have 4 copies of your cache; it is 1 cache, split amongst the 4 servers.

    The only downside is that it's easier to run out of memory with four 0.5 GB instances than with a single 2 GB one.
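
    The key-to-server mapping is done on the client side; here is a hedged sketch of the simple modulo hashing many clients default to (Python, hypothetical server addresses):

```python
import hashlib

servers = ["10.0.0.1:11211", "10.0.0.2:11211",
           "10.0.0.3:11211", "10.0.0.4:11211"]  # hypothetical addresses

def server_for(key):
    # Hash the key and pick one server: each key lives in exactly one
    # place, so this is one cache split four ways, not four copies.
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]

for key in ("user:42", "session:abc", "page:/home"):
    print(key, "->", server_for(key))
```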

    Alister Bulman : distributing entries across servers is the job of the client. There are a number of techniques to aid in that, which are laid out in the various memcached FAQs and clients.
  • If one instance fails, you still get the advantages of using the cache. This is especially true if you are using consistent hashing, which will send the same data to the same instance, rather than spreading new reads/writes among the machines that are still up.

    You may also elect to run servers on 32-bit operating systems, which cannot address more than around 3 GB of memory.

    Check the FAQ: http://www.socialtext.net/memcached/ and http://www.danga.com/memcached/
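
    For the consistent hashing mentioned above, a toy ring (Python, invented server names) shows why losing one instance only remaps that instance's share of the keys:

```python
import bisect
import hashlib

def _hash(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class HashRing:
    """Toy consistent-hash ring with virtual nodes."""

    def __init__(self, servers, replicas=100):
        self._ring = sorted(
            (_hash(f"{server}#{n}"), server)
            for server in servers for n in range(replicas))
        self._points = [point for point, _ in self._ring]

    def server_for(self, key):
        # First ring point clockwise from the key's hash (wrapping around).
        index = bisect.bisect(self._points, _hash(key)) % len(self._points)
        return self._ring[index][1]

before = HashRing(["s1", "s2", "s3", "s4"])
after = HashRing(["s1", "s2", "s3"])  # "s4" died

keys = [f"key{n}" for n in range(1000)]
moved = sum(before.server_for(k) != after.server_for(k) for k in keys)
print(f"{moved} of {len(keys)} keys moved")  # roughly a quarter, not all
```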

  • I would also add that, in theory, several machines can improve performance: if you have a lot of frontends doing heavy reads, the network capacity and processing power of a single machine can become the bottleneck.

    This advantage depends heavily on how memcache is used, however (sometimes it is faster to fetch everything from one machine).

    From Anton

Obtaining a collection of constructed subclassed types using reflection

I want to create a class which implements IEnumerable<T> but which, using reflection, generates T's and returns them via IEnumerable<T>, where T' is an entirely constructed subclass of T with some properties hidden and others read-only.

Okay, that might not be very clear. Let me explain this via the medium of code - I'd like to have a class CollectionView<T> as follows:-

public class CollectionView<T> : IEnumerable<T> {
  public CollectionView(IEnumerable<T> inputCollection, 
    List<string> hiddenProperties, List<string> readonlyProperties) {
    // ...
  }

  // IEnumerable<T> implementation which returns a collection of T' where T':T.
}

...

public class SomeObject {
  public string A { get; set; }
  public string B { get; set; }
  public string C { get; set; }
}

...

var sourceObjects      = new List<SomeObject>(); // the collection to wrap
var hiddenProperties   = new List<string>(new[] { "A" });
var readOnlyProperties = new List<string>(new[] { "C" });

IEnumerable<SomeObject> someObjects = new CollectionView<SomeObject>(
  sourceObjects, hiddenProperties, readOnlyProperties);

...

dataGridView1.DataSource = someObjects;

(When displayed in dataGridView1, only columns B and C are shown, and C's underlying store is read-only.)

Is this possible/desirable or have I completely lost my mind/does this question demonstrate my deep inadequacy as a programmer?

I want to do this so I can manipulate a collection that is to be passed into a DataGridView, without having to directly manipulate the DataGridView to hide columns/make columns read-only. So no 'oh just use dataGridView1.Columns.Remove(blah) / dataGridView1.Columns[blah].ReadOnly = true' answers please!!

Help!

  • Castle.DynamicProxy will help you accomplish this. What you would do is create a class proxy that inherits from T, together with an interceptor. You would store the collections of hidden and read-only properties; when a getter or setter is called, the interceptor checks whether the property appears in either collection and takes the appropriate action.

    However, I don't know how you would hide a property. You cannot change the access modifier of a base-class member in a derived class. You MAY be able to use the new keyword, but I don't know how to do that with Castle.DynamicProxy.

    From Gilligan
  • You simply can't hide properties, even by creating subclassed proxies. You could construct a different type dynamically that holds only the desired properties, but it would not be a T.

    Returning a list of such objects could still be sufficient if you only need data binding.

  • I decided to take a different approach to this problem - I really wasn't seeing the wood for the trees! I created an extension method which converts my IEnumerable<T> to a DataTable, which can then be passed around as required:-

    public static DataTable ToDataTable<T>(this IEnumerable<T> collection)
    {
        DataTable ret = new DataTable();

        Type type = typeof(T);

        foreach (PropertyInfo propertyInfo in type.GetProperties())
        {
            // Ignore indexed properties.
            if (propertyInfo.GetIndexParameters().Length > 0) continue;

            // Give the column the property's type (unwrapping Nullable<T>)
            // so the grid sorts and edits values sensibly.
            ret.Columns.Add(propertyInfo.Name,
                Nullable.GetUnderlyingType(propertyInfo.PropertyType)
                    ?? propertyInfo.PropertyType);
        }

        foreach (T data in collection)
        {
            DataRow row = ret.NewRow();
            foreach (PropertyInfo propertyInfo in type.GetProperties())
            {
                // Ignore indexed properties.
                if (propertyInfo.GetIndexParameters().Length > 0) continue;

                // A DataRow cannot hold null; substitute DBNull.
                row[propertyInfo.Name] =
                    propertyInfo.GetValue(data, null) ?? DBNull.Value;
            }

            ret.Rows.Add(row);
        }

        return ret;
    }
    
    From kronoz

How to read a password in Ruby

Hi,

I am running ruby and MySQL on a Windows box.

I have some Ruby code that needs to connect to a MySQL database and perform a SELECT. To connect to the database I need to provide a password (amongst other things). My question is this: how can I get Ruby to display the typed password as a line of asterisks in the 'dos box'?

In other words, the ruby code can display a prompt requesting the password, the user types in the password and hits the enter key. What I need is for the password, as it is typed, to be displayed as a line of asterisks.

Many thanks.

  • Poor man's solution:

    system "stty -echo"
    # read password
    system "stty echo"
    

    Or using http://raa.ruby-lang.org/project/ruby-password/

    The target audience for this library is system administrators who need to write Ruby programs that prompt for, generate, verify and encrypt passwords.

    Edit: Whoops I failed to notice that you need this for Windows :(

    From jk
  • To answer my own question - and for the benefit of anyone else who would like to know:

    There is a Ruby gem called highline that does what you need.

    require 'rubygems'
    require 'highline/import'
    
    def get_password(prompt="Enter Password")
       ask(prompt) {|q| q.echo = false}
    end
    
    thePassword = get_password()
    

    Works a treat!

  • According to the Highline doc, this seems to work. Not sure if it will work on Windows.

    #!/usr/local/bin/ruby
    require 'rubygems'
    require 'highline/import'
    
    username = ask("Enter your username:  ") { |q| q.echo = true }
    password = ask("Enter your password:  ") { |q| q.echo = "*" }
    

    Here's the output on the console:

    $ ruby highline.rb 
    Enter your username:  doug
    Enter your password:  ******
    
    From Eric Monti

Should I store all projects in one repository or multiple?

I am currently using TortoiseSVN to manage a couple of the projects that I have on the go at the moment. When I first moved everything into source control I wasn't really sure how everything should be laid out so I ended up putting each project into its own repository.

I was wondering would it be a good idea for me just to move them all into one big repository and have them split into project folders? What does everyone else do?

At the moment none of them share common code, but they may in the future. Would it be easier to manage if they were all together?

Thanks.

  • Personally I prefer each project in its own repository.

    From paan
  • As long as each project has /trunk, /tags and /branches, you're good. Proper continuous integration is the criterion here.

    From Matt Hinze
  • Yes, put everything in source control.

    If you're using SVN, keep projects in their own repository - svn is slow, and gets slower.

    Nicholas Riley : While Subversion might be a bit slow, I've certainly not noticed it slow down with the kinds of load a personal project would be remotely likely to inflict; the added administrative overhead of multiple repositories would be much more annoying.
    From Marcin
  • I would absolutely keep each project in its own repository, separate from all others. This will give each project its own history of commits. Rollbacks on one project will not affect other projects.

    Dima : If you keep your projects in separate directories in a single repository, then each project would still have its own history of commits.
    Derek Park : Rolling back a change can be done at pretty much any granularity level. There's generally no reason for it to be done at the repository level. Changelogs can likewise be accessed at a directory level. You will be dealing with directories (branches) even if you have one project per repository.
  • My rule of thumb is to consolidate things that are delivered together. In other words, if you might deliver project X and project Y separately, then put them in separate repos.

    Yes, sometimes this means you have a huge repo for a project that contains a huge number of components, but people can operate on sub-trees of a repo and this forces them to think of the "whole project" when they commit changes to the repo.

    From andy
  • If your projects are independent, it's fine to keep them in separate repositories. If they share components, then put them together.

    From Dima
  • For Subversion, I'd suggest putting everything in the same repository; the administrative overhead of setting up a new repository is too high to make it a no-brainer, so you're more likely not to version something and regret it later. Subversion provides plenty of fine-grained access controls if you need to restrict access to a portion of your repository.

    As I begin to migrate my projects to Mercurial, however, I've switched to creating a repository per project, because it just takes a "hg init" to create a new one in place, and I can use the hg forest extension to easily perform operations on nested repositories. Subversion has svn:externals, which are somewhat similar, but require more administrative overhead.

    Oddmund : Setting up SVN is not hard.
    Steve Jessop : I was going to say that, but then it occurred to me that maybe not everybody uses just local repositories for their own stuff. If you plan to set up a web interface, and access restrictions, and so on, for each repository, then it's more than no work. Although I imagine it's scriptable.
    Nicholas Riley : Yeah, that's what I meant - repositories that aren't network accessible aren't terribly useful to me as I work from at least 5 machines every day. (I realize I'm probably in the minority that way).
  • If you're going with a separate repository for each project, you might use the svn:externals property to refer to other repositories - and thus share code.

  • I would store them in the same repository. It's kind of neater. Plus why would it matter for continuous integration and such - you can always pull a specific folder from the repository.

    It's also easier to administer - accounts to one repository, access logs of one repository etc.

    From Svet
  • Depends to an extent what you mean by "project".

    I have a general local repository containing random bits of stuff that I write (including my website, since it's small). A single-user local SVN repository is not going to suffer noticeable performance issues until you've spent a lot of years typing. By which time SVN will be faster anyway. So I've yet to regret having thrown everything in one repository, even though some of the stuff in there is completely unrelated other than that I wrote it all.

    If a "project" means "an assignment from class", or "the scripts I use to drive my TiVo", or "my progress in learning a new language", then creating a repos per project seems a bit unnecessary to me. Then again, it doesn't cost anything either. So I guess I'd say don't change what you're doing. Unless you really want the experience of re-organising repositories, in which case do change what you're doing :-)

    However, if by "project" you mean a 'real' software project, with public access to the repository, then I think separate repos per project is what makes sense: partly because it divides things cleanly and each project scales independently, but also because it's what people will expect to see.

    Sharing code between separate repositories is less of an issue than you might think, since svn has the rather lovely "svn:externals" feature. This lets you point a directory of your repository at a directory in another repository, and check that stuff out automatically along with your stuff. See, as always, the SVN book for details.
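    As a sketch of how that sharing looks in practice (the repository URL and directory name here are invented for illustration):

    ```shell
    # Run in the working copy of the project that wants the shared code.
    # Maps the local directory "shared" onto another repository's trunk.
    svn propset svn:externals "shared https://svn.example.com/common/trunk" .
    svn commit -m "Pull in the common library via svn:externals"

    # From now on, a plain update also checks out and updates ./shared.
    svn update
    ```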

  • If you work with a lot of other people you might consider whether everyone needs the same level of access to every project. I think it is easier to give access rights per person if you put each project in a separate repository.

Problem with adding graphics to TLabel

I'm trying to create, with Delphi, a component inherited from TLabel, with some custom graphics added to it in TLabel.Paint. I want the graphics to be on the left side of the text, so I overrode GetClientRect:

function TMyComponent.GetClientRect: TRect;
begin
  result := inherited GetClientRect;
  result.Left := 20;
end;

This solution has a major problem I'd like to solve: it's not possible to click on the "graphics area" of the control, only on the label area. If the caption is an empty string, it's not possible to select the component in the designer by clicking it at all. Any ideas?

  • What methods/functionality are you getting from TLabel that you need this component to do?

    Would you perhaps be better off making a descendant of, say, TImage, and drawing your text as part of its Paint method?

    If it really has to be a TLabel descendant (with all that this entails) then I think you'll be stuck with this design-time issue - doesn't TLabel have this problem anyway when the caption is empty?

    I'll be interested in the other answers you get! :-)

    Harriv : Basically I was just thinking "I need label with graphics", and since the text drawing is already there adding only the graphics part was my first bet for this problem.
    robsoft : That makes sense. (I see your comment on @neftali's suggestion) - glad you've got a solution now!
    From robsoft
  • First, excuse my bad English.
    I think it is not a good idea to change the ClientRect of the component. This property is used internally by many of the component's methods and procedures, and changing it can alter the component's behaviour.

    Instead, you can change the point where the text is drawn (20 pixels, in the DoDrawText procedure in the example), and the component will still respond to events in the graphics area.

    procedure TGrlabel.DoDrawText(var Rect: TRect; Flags: Integer);
    begin
      Rect.Left := 20;
      inherited;
    end;
    
    procedure TGrlabel.Paint;
    begin
      inherited;
    
      Canvas.Brush.Color := clRed;
      Canvas.Pen.Color := clRed;
      Canvas.pen.Width := 3;
      Canvas.MoveTo(5,5);
      Canvas.LineTo(15,8);
    
    end;
    
    Harriv : Yes, that seems to do the trick. Thank you.
    From Neftalí

What is the best way to profile PHP code

I'd like to find a way to determine how long each function, and each file, in a PHP application takes to run. I've got an old legacy PHP application that I'm trying to find the "rough spots" in, so I'd like to locate objectively which routines and pages are taking a very long time to load.

Are there any pre-made tools that allow for this, or am I stuck using microtime, and building my own profiling framework?

  • Take a look at Xdebug, which allows in-depth profiling. And here's an explanation of how to use Xdebug.

    Xdebug's Profiler is a powerful tool that gives you the ability to analyze your PHP code and determine bottlenecks or generally see which parts of your code are slow and could use a speed boost. The profiler in Xdebug 2 outputs profiling information in the form of a cachegrind compatible file.

    Kudos to SchizoDuckie for mentioning Webgrind. This is the first I've heard of it. Very useful (+1).

    Otherwise, you can use kcachegrind on linux or its lesser derivative wincachegrind. Both of those apps will read xdebug's profiler output files and summarize them for your viewing pleasure.

    From enobrev
  • If you install the xdebug extension, you can set it up to export run profiles, which you can read in WinCacheGrind (on Windows). I can't recall the name of the app that reads the files on Linux.

    From mabwi
  • I once saw a screen-cast for Zend Core. Looks pretty good, but it actually costs money, I don't know if that's an issue for you.

    From jakemcgraw
  • xdebug's profiling functions are pretty good. If you get it to save the output in valgrind-format, you can then use something like KCachegrind or Wincachegrind to view the call-graph and, if you're a visual kind of person, work out more easily what's happening.

    From Greg
  • XDebug is nice, but it's not that easy to use or set up, IMO.

    The profiler built into Zend Studio is very easy to use. You just hit a button on a browser toolbar and BAM - you have your code profile. It's perhaps not as in-depth as a CacheGrind dump, but it's always been good enough for me.

    You do need to set up Zend Platform too, but that's straightforward and free for development use - you'd still have to pay for the Zend Studio licence though.

    From Marc Gear
  • I actually did some optimisation work last week. XDebug is indeed the way to go.

    Just enable it as an extension (for some reason it wouldn't work with zend_extension on my Windows machine), set up your php.ini with xdebug.profiler_enable_trigger=On, and call your normal URLs with XDEBUG_PROFILE=1 as either a GET or a POST variable to profile that very request. There's nothing easier!

    I can also really recommend Webgrind, a web-based (PHP) Google Summer of Code project that can read and parse your debug output files!
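    For reference, the trigger setup described in this answer amounts to a php.ini fragment along these lines (the extension path is an assumption for a typical Linux install; on Windows it would point at the Xdebug DLL):

    ```ini
    ; Load Xdebug (adjust the path for your system).
    zend_extension = /usr/lib/php/modules/xdebug.so

    ; Do not profile every request...
    xdebug.profiler_enable = 0
    ; ...only those that pass XDEBUG_PROFILE (GET/POST/cookie).
    xdebug.profiler_enable_trigger = 1

    ; Where the cachegrind.out.* files are written, for
    ; KCachegrind/WinCacheGrind/Webgrind to read.
    xdebug.profiler_output_dir = /tmp
    ```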

  • The easiest solution is to use the Zend Profiler. You don't need Zend Platform to use it; you can run it directly from your browser. It's quite accurate, has most of the features you need, and is integrated into Zend Studio.

    From andy.gurin
  • In addition to having seriously powerful real-time debugging capabilities, PhpED from NuSphere (www.nusphere.com) has a built-in profiler that can be run with a single click from inside the IDE.

  • Here is a nice tip.

    When you use XDebug to profile your PHP, set xdebug.profiler_enable_trigger and use this as a bookmarklet to trigger the XDebug profiler ;)

    javascript:if(document.URL.indexOf('XDEBUG_PROFILE')<1){var%20sep=document.URL.indexOf('?');sep%20=%20(sep<1)?'?':'&';window.location.href=document.URL+sep+'XDEBUG_PROFILE';}