Sunday, February 13, 2011

How to keep your own debug lines without checking them in?

When working on some code, I add extra debug logging of some kind to make it easier for me to trace the state and values that I care about for this particular fix.

But if I checked this into the source code repository, my colleagues would get angry with me for polluting the log output and polluting the code.

So how do I locally keep these lines of code that are important to me, without checking them in?

Clarification: Many answers relate to the log output, pointing out that log levels can filter it out. And I agree with that.

But I also mentioned the problem of polluting the actual code. If someone puts a log statement between every other line of code to print the value of all variables all the time, it really makes the code hard to read. I would really like to avoid that as well, basically by not checking in the logging code at all. So the question is: how do you keep your own special-purpose log lines, so you can use them for your debug builds, without cluttering up the checked-in code?

  • What source control system are you using? Git allows you to keep local branches. If worse comes to worst, you could just create your own 'Andreas' branch in the repository, though branch management could become pretty painful.

    From swilliams
  • But if I checked this into the source code repository, my colleagues would get angry with me for polluting the Log output and polluting the code.

    I'm hoping that your Log framework has a concept of log levels, so that your debugging could easily be turned off. Personally I can't see why people would get angry at more debug logging - because they can just turn it off!

    From matt b
  • Why not wrap them in preprocessor directives (assuming the construct exists in the language of your choice)?

    #if DEBUG
        logger.debug("stuff I care about");
    #endif
    

    Also, you can use a log level like trace, or debug, which should not be turned on in production.

    if(logger.isTraceEnabled()) {
        logger.log("My expensive logging operation");
    }
    

    This way, if something in that area does crop up one day, you can turn logging at that level back on and actually get some (hopefully) helpful feedback.


    Note that both of these solutions would still allow the logging statements to be checked in, but I don't see a good reason not to have them checked in. I am providing solutions to keep them out of production logs.

    DOK : I believe that keeps the code out of the compiled code, but it will still be in the code that is checked into the source control repository.
    Chris Marasti-Georg : It would be in the code that is checked into the repository, but it would keep the log directives completely out of production releases.
  • Similar to

    #if DEBUG #endif....
    

    But that will still mean that anyone running with the 'Debug' configuration will hit those lines.

    If you really want them skipped then use a log level that no one else uses, or....

    Create a different build configuration called MYDEBUGCONFIG and then put your debug code between blocks like this:

    #if MYDEBUGCONFIG
    ...your debugging code
    #endif
    
  • If this was really an ongoing problem, I think I'd assume that the central repository is the master version, and I'd end up using patch files to contain the differences between the official version (the last one that I worked on) and my version with the debugging code. Then, when I needed to reinstate my debugging, I'd check out the official version, apply my patch (with the patch command), fix the problem, and before checking in, remove the patch with patch -R (for a reversed patch).

    However, there should be no need for this. You should be able to agree on a methodology that preserves the information in the official code line, with mechanisms to control the amount of debugging that is produced. And it should be possible regardless of whether your language has conditional compilation in the sense that C or C++ does, with the C pre-processor.

  • IMHO, you should avoid the #if solution. That is the C/C++ way of doing conditional debugging routines. Instead, attribute all of your logging/debugging functions with the ConditionalAttribute. The constructor of the attribute takes a string, and the method will only be called if a pre-processor symbol of the same name is defined. This has the exact same runtime implications as the #if/#endif solution, but it looks a heck of a lot better in code.
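
    A minimal C# sketch of what that looks like (the class and method names here are illustrative, not from the answer):

    using System;
    using System.Diagnostics;

    public static class DebugLog
    {
        // Calls to this method are removed by the compiler entirely
        // unless the DEBUG symbol is defined for the build.
        [Conditional("DEBUG")]
        public static void Write(string message)
        {
            Console.WriteLine(message);
        }
    }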

    From JaredPar
  • If you really are doing something like:

    puts a log statement between every other line of code, to print the value of all variables all the time. It really makes the code hard to read.

    that's the problem. Consider using a test framework, instead, and write the debug code there.

    On the other hand, if you are writing just a few debug lines, then you can manage to avoid those by hand (e.g. removing the relevant lines with the editor before the commit and undoing the change after it's done) - but of course it has to be very infrequent!

    From Davide
  • I know I'm going to get negative votes for this...
    But if I were you, I'd just build my own tool.

    It'll take you a weekend, yes, but you'll keep your coding style, and your repository clean, and everyone will be happy.

    Not sure what source control you use. With mine, you can easily get a list of the things that are "pending to be checked in". And you can trigger a commit, all through an API.

    If I had that same need, I'd make a program to commit, instead of using the built-in command in the source control GUI. Your program would go through the list of pending things, take all the files you added/changed, make a copy of them, remove all log lines, commit, and then put your versions back.

    Depending on what your log lines look like, you may have to add a special comment at the end of them for your program to recognize them.

    Again, shouldn't take too much work, and it's not much of a pain to use later.
    I don't expect you'll find something already out there that does this for you (and for your source control); it's pretty specific, I think.

    Troy Howard : If it's SVN this would be pretty darn easy to do... Make a little Perl script that removes anything wrapped in comments like // remove-before-checkin-begin to // remove-before-checkin-end (you'd probably want to choose something shorter, and make a snippet for it in VS).
  • If the only objective of the debugging code you are having problems with is to trace the values of some variables, I think that what you really need is a debugger. With a debugger you can watch the state of any variable at any moment.

    If you cannot use a debugger, then you can add some code to print the values to some debug output. But this should be only a few lines whose objective is to make the fix you are working on easier. Once it's committed to trunk it's fixed, and you shouldn't need those debug lines any more, so you must delete them. Don't delete all the debug code, though; good debug code is very useful. Delete only your "personal" tracing debug code.

    If the fix is so long that you want to save your progress by committing to the repository, then what you need is a branch. In that branch you can add as much debugging code as you want, but you should still remove it when merging into trunk.

  • This next suggestion is madness, do not do it, but you could...

    Surround your personal logging code with comments such as

    // ##LOG-START##
    logger.print("OOh A log statment");
    // ##END-LOG##
    

    And before you commit your code run a shell script that strips out your logs.

    I really wouldn't recommend this as it's a rubbish idea, but that never stops anyone.

    Alternatively, instead of surrounding blocks, you could add a comment at the end of every log line and have a script remove those lines...

    logger.print("My Innane log message"); //##LOG
    

    Personally I think that using a proper logging framework with a debug logging level etc should be good enough. And remove any superfluous logs before you submit your code.
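
    If you did go down the marker route, the stripping script could be tiny. A Python sketch (the //##LOG tag matches the example above; everything else is an assumption):

    import sys

    # Read source on stdin and write it back out with tagged log lines dropped.
    # Assumes every disposable log line carries the //##LOG marker.
    for line in sys.stdin:
        if "//##LOG" not in line:
            sys.stdout.write(line)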

Setting Folder permissions on Vista.

I am trying to set the permissions of a folder and all of its children on a Vista computer. The code I have so far is this.

public static void SetPermissions(string dir)
{
    DirectoryInfo info = new DirectoryInfo(dir);
    DirectorySecurity ds = info.GetAccessControl();
    ds.AddAccessRule(new FileSystemAccessRule(@"BUILTIN\Users",
                     FileSystemRights.FullControl,
                     InheritanceFlags.ContainerInherit,
                     PropagationFlags.None,
                     AccessControlType.Allow));

    info.SetAccessControl(ds);
}

However, it's not working as I would expect it to.
Even if I run the code as administrator it will not set the permissions.

The folder I am working with is located in C:\ProgramData\<my folder> and I can manually change the rights on it just fine.

Anyone want to point me in the right direction?

  • This may be a dumb question, but have you tried performing the same action manually (e.g. using Explorer)? Vista has some directories that not even users in the Administrators group can modify without taking additional steps. I think there are two steps you need to take first.

    First, use Explorer to make the same modification you're trying to do in your code. If it fails, troubleshoot that.

    Second, test your code on a directory you created under your own user folder. You shouldn't need admin privs to do that; the logged-in account should be able to change ACL on folders under e.g. c:\Users\yourname\documents.

    I'd also step through the code in the debugger and look at the "ds" object just before your call to SetAccessControl. That might show you something unexpected to set you on the right path.

    Erin : Yes, I can change the folder access rights manually.
    From Coderer
  • So the answer is twofold. First, a sub folder was being created before the permissions were set on the folder; second, I needed to OR in one more flag on the permissions so that both folders and files inherited them.

    public static void SetPermissions(string dir)
    {
        DirectoryInfo info = new DirectoryInfo(dir);
        DirectorySecurity ds = info.GetAccessControl();
        ds.AddAccessRule(new FileSystemAccessRule(@"BUILTIN\Users",
                         FileSystemRights.FullControl,
                         InheritanceFlags.ObjectInherit |
                         InheritanceFlags.ContainerInherit,
                         PropagationFlags.None,
                         AccessControlType.Allow));
        info.SetAccessControl(ds);
    }
    

    After that everything appears to be working.

    From Erin

How do you format the body of a JMS text message?

Does everyone just use XML in the message? Are there any good alternatives to XML? If you do use XML, do you define an XML Schema so clients know how to send messages to your service?

  • We use XML, but I think the important thing is to tailor the solution to the problem. The reason we use XML is that we are basically sending an object across in the message. There's no reason it can't be plain text, if applicable for the message you are sending, using headers to send along properties if appropriate.


    We haven't defined an XSD or DTD for our XML messages, but we do have a formal document describing their composition so that other teams can use our feeds without bugging us.
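
    For concreteness, a minimal Java sketch of sending a text payload (plain JMS; the contentType property is our own convention, not part of the spec):

    import javax.jms.*;

    public class TextMessageSender {
        // Sends an XML (or any other text) payload as a JMS TextMessage.
        public static void send(Session session, Destination dest, String xmlPayload)
                throws JMSException {
            MessageProducer producer = session.createProducer(dest);
            TextMessage message = session.createTextMessage(xmlPayload);
            // Properties can carry metadata alongside the body.
            message.setStringProperty("contentType", "application/xml");
            producer.send(message);
        }
    }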

  • XML, CSV, HTML, a simple word or sentence, ... Any of these are valid depending on the context in which the message is used and created. Just keep it simple and send what is needed in that context.

    It is very flexible and can be adapted to the problem space.

    From Robin
  • XML is probably the most popular, with JSON a close second - but as others have said in this thread, XML, CSV, JSON or even HTML are fine.

    XSDs are overrated, really - their only real value is if you want your clients/customers to code-generate marshalling code (e.g. using JAXB), or if you want to let folks use XSDs in their editors/IDEs to get smart completion

    Josh Brown : I like XSDs for use in IDEs - for both smart completion and validation.

How to split a huge zip file into multiple volumes?

When I create a zip Archive via java.util.zip.*, is there a way to split the resulting archive in multiple volumes? Let's say my overall archive has a filesize of 24 MB and I want to split it into 3 files on a limit of 10 MB per file. Is there a zip API which has this feature? Or any other nice ways to achieve this?

Thanks Thollsten

  • Check: http://saloon.javaranch.com/cgi-bin/ubb/ultimatebb.cgi?ubb=get_topic&f=38&t=004618

    I am not aware of any public API that will help you do that. (Although if you do not want to do it programmatically, there are utilities like WinSplitter that will do it.)

    I have not tried it, but every ZipEntry, when using ZipInput/OutputStream, has a compressed size. You can get a rough estimate of the size of the zipped file while creating it. If you need 2MB zipped files, then you can stop writing to a file after the cumulative size of the entries becomes 1.9MB, leaving .1MB for the manifest file and other zip-specific elements. So, in a nutshell, you can write a wrapper over the ZipOutputStream as follows:

    import java.util.zip.ZipOutputStream;
    import java.util.zip.ZipEntry;
    import java.io.FileOutputStream;
    import java.io.File;
    import java.io.FileNotFoundException;
    import java.io.IOException;
    
    public class ChunkedZippedOutputStream {
    
        private ZipOutputStream zipOutputStream;
    
        private String path;
        private String name;
    
        private long currentSize;
        private int currentChunkIndex;
        private final long MAX_FILE_SIZE = 16000000; // Whatever size you want
        private final String PART_POSTFIX = ".part.";
        private final String FILE_EXTENSION = ".zip";
    
        public ChunkedZippedOutputStream(String path, String name) throws FileNotFoundException {
            this.path = path;
            this.name = name;
            constructNewStream();
        }
    
        public void addEntry(ZipEntry entry) throws IOException {
            // getCompressedSize() may be -1 for entries that have not been
            // written yet, so treat this as a rough estimate.
            long entrySize = entry.getCompressedSize();
            if((currentSize + entrySize) > MAX_FILE_SIZE) {
                closeStream();
                constructNewStream();
            }
            // Write the entry to the current (possibly fresh) chunk.
            currentSize += entrySize;
            zipOutputStream.putNextEntry(entry);
        }
    
        // Call this when done adding entries, so the last chunk is flushed.
        public void close() throws IOException {
            closeStream();
        }

        private void closeStream() throws IOException {
            zipOutputStream.close();
        }
    
        private void constructNewStream() throws FileNotFoundException {
            zipOutputStream = new ZipOutputStream(new FileOutputStream(new File(path, constructCurrentPartName())));
            currentChunkIndex++;
            currentSize = 0;
        }
    
        private String constructCurrentPartName() {
            // This will give names in the form of <file_name>.part.0.zip, <file_name>.part.1.zip, etc.
            StringBuilder partNameBuilder = new StringBuilder(name);
            partNameBuilder.append(PART_POSTFIX);
            partNameBuilder.append(currentChunkIndex);
            partNameBuilder.append(FILE_EXTENSION);
            return partNameBuilder.toString();
        }
    }
    

    The above program is just a hint of the approach and not a final solution by any means.

    From sakana
  • Not exactly what you want, but if you cannot create zip volumes, you might consider simply dividing the file (in Java) as described here.

    From VonC
  • If the goal is to have the output be compatible with pkzip and winzip, I'm not aware of any open source libraries that do this. We had a similar requirement for one of our apps, and I wound up writing our own implementation (compatible with the zip standard). If I recall, the hardest thing for us was that we had to generate the individual files on the fly (the way that most zip utilities work is they create the big zip file, then go back and split it later - that's a lot easier to implement). Took about a day to write and 2 days to debug.

    The zip standard explains what the file format has to look like. If you aren't afraid of rolling up your sleeves a bit, this is definitely doable. You do have to implement a zip file generator yourself, but you can use Java's Deflater class to generate the segment streams for the compressed data. You'll have to generate the file and section headers yourself, but they are just bytes - nothing too hard once you dive in.

    Here's the zip specification - section K has the info you are looking for specifically, but you'll need to read A, B, C and F as well. If you are dealing with really big files (We were), you'll have to get into the Zip64 stuff as well - but for 24 MB, you are fine.

    If you want to dive in and try it - if you run into questions, post back and I'll see if I can provide some pointers.

    vy32 : I'm having problems with multi-volume zip files. Specifically, when a single file component is split across more than one disk file. In file.zx01 I have the file header and the first part of the compressed data, then in file.zx02 I have the rest of the compressed data. But I'm not able to reassemble the files for some reason, and I'm not sure why. Do you have any experience here?
    From Kevin Day

ExternalInterface Performance: Looking for Some Best-Practices/Tips

Hi there:

I'm using Flex 3 in the UI of a Windows app (Flash Player as an embedded ActiveX control), and passing data between them with ExternalInterface (primarily into the Flex app, as opposed to out of it). I'm finding, though, that the performance is pretty awful, particularly with larger (i.e., custom) objects; the more EI calls we make, and the larger the custom objects we pass in, the more performance seems to drop off.

I'm assuming there's a good deal of overhead in serializing these objects, so I'm wondering, are there any best practices out there for using ExternalInterface in this particular way? There doesn't seem to be much out there in terms of documentation on this subject yet.

Is it better, say, to pass a large block of XML into the player control as a string, and parse it with Flex, than to pass it as a custom object, as a rule? How should Flex apps requiring a relatively tight integration with their host apps best use ExternalInterface without sacrificing performance? Is EI performance an issue Adobe is addressing? Any implementation differences between players 9 and 10? What kinds of things should we avoid to get the most out of this feature?

Thanks in advance!

Chris

  • Just to share the answer with anyone who might be interested, the unofficial answer from Adobe (confirmed by a few Adobe engineers at last month's MAX conference) is that marshaling, say, over 30KB or so of data over ExternalInterface is going to cause performance degradation. Little bits of data, no problem -- but larger chunks, regardless of type, etc., will slow things down considerably.

    Apparently it's a known issue, and Flash 10 doesn't seem to offer much in the way of improvements, unfortunately. So in the meantime, a workable solution is to use ExternalInterface for the little things, but to leave the heavier lifting to URLLoader and the XML or JSON parsers. Hope that helps -- if anyone runs into this one and needs more detailed guidance, feel free to contact me directly.
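
    In ActionScript terms, the heavy-lifting half of that workaround looks roughly like this (the URL is a placeholder):

    import flash.events.Event;
    import flash.net.URLLoader;
    import flash.net.URLRequest;

    // Fetch bulk data over HTTP and parse it inside the player, rather than
    // marshaling a large object graph through ExternalInterface.
    var loader:URLLoader = new URLLoader();
    loader.addEventListener(Event.COMPLETE, function(e:Event):void {
        var xml:XML = new XML(loader.data);
        // ...hand the parsed data to the rest of the Flex app
    });
    loader.load(new URLRequest("http://localhost/app/data.xml"));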

  • Brad Neuberg seems to have put a lot of effort into figuring this one out. He has published a workaround at

    http://codinginparadise.org/weblog/2006/02/how-to-speed-up-flash-8s.html

How can I add multiple should_receive expectations on an object using RSpec?

In my Rails controller, I'm creating multiple instances of the same model class. I want to add some RSpec expectations so I can test that it is creating the correct number with the correct parameters. So, here's what I have in my spec:

Bandmate.should_receive(:create).with(:band_id => @band.id, :user_id => @user.id, :position_id => 1, :is_leader => true)
Bandmate.should_receive(:create).with(:band_id => @band.id, :user_id => "2222", :position_id => 2)
Bandmate.should_receive(:create).with(:band_id => @band.id, :user_id => "3333", :position_id => 3)
Bandmate.should_receive(:create).with(:band_id => @band.id, :user_id => "4444", :position_id => 4)

This is causing problems because it seems that the Bandmate class can only have 1 "should_receive" expectation set on it. So, when I run the example, I get the following error:

Spec::Mocks::MockExpectationError in 'BandsController should create all the bandmates when created'
Mock 'Class' expected :create with ({:band_id=>1014, :user_id=>999, :position_id=>1, :is_leader=>true}) but received it with ({:band_id=>1014, :user_id=>"2222", :position_id=>"2"})

Those are the correct parameters for the second call to create, but RSpec is testing against the wrong parameters.

Does anyone know how I can set up my should_receive expectations to allow multiple different calls?

  • Multiple expectations are not a problem at all. What you're running into are ordering problems, given your specific args on unordered expectations. Check this page for details on ordering expectations.

    The short story is that you should add .ordered to the end of each of your expectations.
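
    Applied to the expectations above, that would be:

    Bandmate.should_receive(:create).with(:band_id => @band.id, :user_id => @user.id, :position_id => 1, :is_leader => true).ordered
    Bandmate.should_receive(:create).with(:band_id => @band.id, :user_id => "2222", :position_id => 2).ordered
    # ...and likewise for the remaining expectations.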

    Micah : Huh, it's working now even without .ordered. I guess I had a mistake in my code and blamed it on RSpec. Thanks for the tip!

VB.NET Importing Classes

Edit: This was accidentally posted twice. Original: http://stackoverflow.com/questions/243900/vb-net-importing-classes

I've seen some code where a Class is imported, instead of a namespace, making all the static members/methods of that class available. Is this a feature of VB? Or do other languages do this as well?

TestClass.vb

public class TestClass
    public shared function Somefunc() as Boolean
        return true
    end function
end class

MainClass.vb

imports TestClass

public class MainClass
    public sub Main()
        Somefunc()
    end sub
end class

These files are in the App_Code directory. Just curious, because I've never thought of doing this before, nor have I read about it anywhere.

  • By using the "HideModuleNameAttribute" you can call methods without identifying thier parent.

    Example:

    Public Class TestClassCaller
        Public Sub New()
            SomeMethod()
        End Sub
    
    End Class
    
    <HideModuleName()> _
    Public Module TestClass
        Public Sub SomeMethod()

        End Sub
    End Module
    
  • Imports only creates a reference to the class; it does not create an instance of it to use.

    The reason you see the function in your new class is that it's a shared function, which doesn't require an instance of the parent class in order to be called.

    Cheers!

    From thismat

SSAS custom group by query

My fact table looks like this

yesterday a
yesterday a
yesterday a
yesterday b
yesterday b
yesterday c
today     a
today     a
today     b
today     b
tomorrow  a
tomorrow  a
tomorrow  c
tomorrow  d

In the end I need an Excel report like this

               repetition count
               1     2     3
yesterday      1     1     1
today          0     2     0
tomorrow       2     1     0

How to create a "repetion count" dimension in SSAS 2k5 ? Please keep in mind that my fact table is a liitle bit more complicated and I have more other dimension there.

My idea is to create a named query in the DSV, but I have some doubts about whether filtering will work correctly.

  • I think you would need to do a view with a GROUP BY and a count so that you load data like the following into your cube

    yesterday a 3
    yesterday b 2
    yesterday c 1
    today     a 2
    today     b 2
    tomorrow  a 2
    tomorrow  c 1
    tomorrow  d 1
    

    Then you could use the count column as the key for a repetition count dimension and you would create a measure based on the row count aggregate type.
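
    A view along those lines could look like this (table and column names are hypothetical):

    -- Collapse the raw fact rows into one row per (day, item) with a count.
    SELECT [Day], [Item], COUNT(*) AS RepetitionCount
    FROM FactEvents
    GROUP BY [Day], [Item]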

Calculating a 2D Vector's Cross Product

From wikipedia:

the cross product is a binary operation on two vectors in a three-dimensional Euclidean space that results in another vector which is perpendicular to the plane containing the two input vectors.

Given that the definition requires at least three dimensions, how does one calculate the cross product of two 2d vectors?

I have seen two implementations. One returns a new vector (but only accepts a single vector), the other returns a scalar (but is a calculation between two vectors).

Implementation 1 (returns a scalar):

float CrossProduct(const Vector2D & v1, const Vector2D & v2) const
{
    return (v1.X*v2.Y) - (v1.Y*v2.X);
}

Implementation 2 (returns a vector):

Vector2D CrossProduct(const Vector2D & v) const
{
    return Vector2D(v.Y, -v.X);
}

Why the varying implementations? What would I use the scalar implementation for? What would I use the vector implementation for?

The reason I ask is because I'm writing a Vector2D class myself and don't know which method to use.

  • Implementation 1 returns the magnitude of the vector that would result from a regular 3D cross product of the input vectors, taking their Z values implicitly as 0 (i.e. treating the 2D space as a plane in the 3D space). The 3D cross product will be perpendicular to that plane, and thus have 0 X & Y components (thus the scalar returned is the Z value of the 3D cross product vector).

    Implementation 2 returns a vector perpendicular to the input vector still in the same 2D plane. Not a cross product in the classical sense but consistent in the "give me a perpendicular vector" sense.

    Note that 3D Euclidean space is closed under the cross product operation--that is, a cross product of two 3D vectors returns another 3D vector. Both of the above 2D implementations are inconsistent with that in one way or another.

    Hope this helps...

    Zack Mulgrew : Thanks. Your explanation makes a lot of sense.
    mattiast : Actually, implementation 2 is the cross product of v and the unit vector pointing up in the z-direction.
    Drew Hall : @mattiast: True. That's exactly how the 2D 'perp' operation is described in 3D.
    From Drew Hall
  • In short: It's a shorthand notation for a mathematical hack.

    Long explanation:

    You can't do a cross-product with vectors in 2D space. The operation is not defined there.

    However, it is often interesting to see what the cross-product of two vectors would be, assuming that the 2D vectors are extended to 3D by setting their z-coordinate to zero. This is the same as working with 3D vectors on the XY-plane.

    If you extend the vectors that way and calculate the cross-product of such an extended vector pair, you'll notice that only the Z-component has a meaningful value. X and Y will always be zero.

    That's the reason the Z-component is often simply returned as a scalar. This scalar can, for example, be used to find the winding of three points in 2D space.

    From a purely mathematical point of view the cross product in 2D space does not exist; the scalar version is the hack, and a 2D cross product that returns a 2D vector makes no sense at all.
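
    As a sketch of that winding test (our own example, built on Implementation 1 from the question):

    #include <iostream>

    struct Vector2D { float X, Y; };

    // Implementation 1 from the question: the scalar (Z) cross product.
    float CrossProduct(const Vector2D& v1, const Vector2D& v2)
    {
        return (v1.X * v2.Y) - (v1.Y * v2.X);
    }

    // Positive => counter-clockwise, negative => clockwise, zero => collinear
    // (assuming the usual mathematical convention with Y pointing up).
    float Winding(const Vector2D& a, const Vector2D& b, const Vector2D& c)
    {
        Vector2D ab{b.X - a.X, b.Y - a.Y};
        Vector2D ac{c.X - a.X, c.Y - a.Y};
        return CrossProduct(ab, ac);
    }

    int main()
    {
        std::cout << Winding({0, 0}, {1, 0}, {0, 1}) << "\n"; // prints 1 (CCW)
    }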

  • Another useful property of the cross product is that its magnitude is related to the sine of the angle between the two vectors:

    | a x b | = |a| . |b| . sine(theta)

    or

    sine(theta) = | a x b | / (|a| . |b|)

    So, in implementation 1 above, if a and b are known in advance to be unit vectors then the result of that function is exactly that sine() value.

    Zack Mulgrew : Thanks! This is good to know.
    From Alnitak
  • I'm using the 2D cross product in my calculations to find the new correct rotation for an object that is being acted on by a force vector at an arbitrary point relative to its center of mass. (The scalar Z one.)

    Zack Mulgrew : Thanks for the example!

How to use SVN to rollout ASP.NET websites?

We use ASP.NET / C#.

We work locally, test locally, check in our code and binaries through SVN.

On our server, we check out the latest 'build' from SVN directly into our IIS web directory.

Is this a good practice, or is there something else we should be doing for rollouts?

  • In theory there is no problem with this practice. I imagine it keeps your rollouts simple, and you are able to check which revision is currently live at any given time.

    Perhaps others will raise pertinent issues but I really can't see any major reasons not to do this.

  • I do the same thing with one exception... I first check out the latest build to a dev version of the site on the same server, just to make sure there aren't any weird issues on the server side. It's rare that there ever are any, but it's happened before.

    Not sure if there's a better way of doing it, but it's worked well for me so far.

    From upheaval
  • Why check it out? You could easily create a script to export it (clean, no .svn directories, no mess) to the IIS directory.

    SVN supports an export feature, SVN Export

    Edit: Just noticed this has been covered before on SO: Link

    rmeador : The reason to check it out is so you can do updates instead of full redeployments. Also, you can roll it back when you mess up (which you inevitably will).
    thismat : Both of which you can do with export. Export supports revision exports, plus, you typically do not want your source in your root web, shouldn't all sites be precompiled when finished and deployed?
    thismat : I just don't see the reason to have excess clutter and hassle. A simple export keeps things clean. Only want to export one or two files? Export those two files....It's very easy to script and easier to manage than having a full blown versioned copy.
    From thismat

javascript closures and function placement

Does the placement of a function have an effect on the performance of closures within scope? If so, where is the optimal place to put these functions? If not, is the implied association by closure enough reason to place a function in another place logically?

For instance, if foo does not rely on the value of localState, does the fact that localState is accessible from foo have implications as to foo's execution time, memory use, etc.?

(function(){
    var localState;

    function foo(){
        // code
    }

    function bar(){
        // code
        return localState;
    }
})();

In other words, would this be a better choice, and if so why?

(function(){
    function foo(){
        // code
    }

    var localState;

    function bar(){
        // code
        return localState;
    }
})();

Darius Bacon has suggested below that the two samples above are identical since localState can be accessed anywhere from within the block. However, the example below where foo is defined outside the block may be a different case. What do you think?

function foo(){
    // code
}

(function(){

    var localState;

    function bar(){
        // code
        foo();
        return localState;
    }
})();
  • The scope of a var or function declaration is the whole block it appears in, regardless of where in the block the declaration is; so it'd be surprising for it to affect efficiency.

    That is, it shouldn't matter whether "function foo()" is before or after "var localState" within this block. It may matter whether "function foo()" is in this block or an enclosing one (if it can be hoisted to a higher scope because it doesn't use any local variables); that depends on details of your Javascript compiler.

  • I don't think there would be any performance overhead, as JavaScript doesn't use the notion of a function stack; it supports lexical scoping. The same state is carried forth across closure calls. On a side note, in your example you don't seem to be executing any statements!

    brad : You're right! I tried to imply some execution with the "// code" comments, but apparently was not very clear. Thanks for your answer.
    From questzen
  • Both those snippets are equivalent, because they're both defined in the (same) environment of the anonymous function you're creating. I think you'd be able to access localState from foo either way.

    That being said... if you have absurd amounts of variables in the environment you're creating, then foo's execution time might be affected, as variable lookups will likely take longer. If there are tons of variables that you no longer use in the function you define foo in, and foo doesn't need them either, then foo will cause them to not be garbage-collected, so that could also be an issue.

    ephemient : In fact, to see that JS does in fact close over unused variables, try this: (function(){var a=1;return function(x){eval(x)}})()("alert(a)") Clearly, the inner function makes no reference to `a`, yet it is available.
    ephemient : In contrast, in Perl, sub{my$a=1;sub{eval$_[0]}}->()('print $a'), does not print anything, but sub{my$a=1;sub{$a;eval$_[0]}}->()('$a') does: Perl cares about whether the variable is referenced to determine whether it is closed over.
    From Claudiu
  • Dog, I would hope the order of declarations would be something the JavaScript interpreters would abstract away. In any case, if there is a performance difference, it would be so minimal as to make this a poster child for the evils of premature optimization.

    Borgar : Second that. Let's not find more ways to create unreadable code.
    keparo : Word up, Andrew. Even if there were some hypothetical performance gain, I'd rather see it written in the cleanest and most logical order.
  • Every function in JavaScript is a closure. The runtime cost of resolving a variable's value is only incurred if the variable is referenced by the function. For instance, in this example function y captures the value of x even though x is not referenced directly by y:

    var x = 3;
    function y() { return eval("x"); }
    y(); // => 3
    
    From WPWoodJr
  • In your examples the difference won't really matter. Even if foo is in the global scope you won't have a problem.

    However, it's useful to keep in mind that if you use the style of assigning functions to variables to declare your functions, the order in which they are declared can become quite a problem.

    For a better idea, try the following two examples:

    CheckOne();
    function CheckOne() {
        alert('check...check one.');
    }
    
    CheckTwo();
    var CheckTwo = function() {
        alert('check...check two.');
    };
    

    The only difference between the second and the first is the style they use to declare their functions. The second one generates a reference error.

    Cheers.

    From coderjoe

WordPress Plugin Development

Besides the Codex, what resources do you recommend to help a person new to creating plugins get started on a WordPress plugin? I have an idea, but need a bit better explanation than what is in the Codex to get started.

UPDATE: Is there a book that I could buy to help me out with this?

  • Having written the MyBlogLog plugin (the original one, that is), I found that the WordPress hooks list (can't remember the link offhand) was incredibly useful, as was the sample code from the Codex and the WP install files. Reading through other developers' plugins is also a good way to learn, as you can see how they implemented things and use those techniques to save yourself some R&D time.

    What are you looking to create, anyways?

    Edit:

    I posted a comment with this, but just in case it gets lost...

    For your specific needs, you're going to want to store data and be able to manage and retrieve it, so creating a custom database table in your plugin is something you will want to do. See this Codex link:

    http://codex.wordpress.org/Creating_Tables_with_Plugins

    Then you can just add your management code into the admin screens using the techniques found on this Codex page:

    http://codex.wordpress.org/Adding_Administration_Menus

    If you want to display the items on a page, you can either write yourself a custom PHP WP Page template to query the DB directly:

    http://codex.wordpress.org/Pages#Page_Templates

    Or just add a hook filter on your plugin to write the results to the page based on a keyword you specify:

    http://codex.wordpress.org/Plugin_API#Filters
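
    A minimal filter of that kind might look like this (the plugin, function, and keyword names are made up for illustration):

    <?php
    /*
    Plugin Name: Song Submissions (sketch)
    */

    // Replace a placeholder keyword in post/page content with plugin output.
    function song_submissions_filter($content) {
        return str_replace('[song_list]', 'TODO: render the submitted songs here', $content);
    }
    add_filter('the_content', 'song_submissions_filter');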

    Mike Wills : I host an indie music show where I get a lot of submissions from bands. I am looking for a better way to accept and organize the songs they submit. I want to build it into WP to keep it all in one place.
    Abyss Knight : Awesome! You'll probably want to use a custom MySQL table for that sort of thing. That's a bit more advanced than the overly simplistic stuff on the Codex. This page in the Codex might be useful: http://codex.wordpress.org/Creating_Tables_with_Plugins
    Mike Wills : Thank you very much. Looks like I have some reading to do...
  • Although technically still information found within the Codex, the Codex contains links to external resources. Apologies for not posting a direct link, but look again, and you should find them.

  • Here is a useful set of links on how to write WordPress plugins. Be aware that it is relatively "advanced" (in that it introduces a number of object-oriented methods to the process). You should really read the WordPress Codex stuff first.

    From gaoshan88
  • I think one of the most important resources is the code used in other plugins.

    Not necessarily ones doing what yours does, but ones that have features you want to implement.

    For example, if you want to find how to create an options page, you are going to work it out pretty quickly if you see how all the others do it.

Stop text from wrapping with NSLayoutManager

Given any arbitrary, one-line string, my goal is to render it into a bitmap representation. However, I have no means of finding out its dimensions beforehand, so I am reduced to getting the glyph range's bounding rect and resizing my canvas if it's not large enough. Unfortunately, if the canvas is not wide enough for the string, but tall enough that the text can wrap, the layout manager decides that this is best. How can I stop the layout manager from wrapping the text?

  • You should be able to use -[NSString sizeWithAttributes:] to get a bounding box for the string with a given set of drawing attributes first, then size your view accordingly and do the actual drawing of the string.
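
    Roughly, in Objective-C (the font here is arbitrary; measure with the same attributes you draw with):

    #import <Cocoa/Cocoa.h>

    // Measure a one-line string up front so the canvas can be sized to fit
    // and the layout manager never has a reason to wrap.
    NSSize MeasureLine(NSString *string)
    {
        NSDictionary *attributes = @{ NSFontAttributeName: [NSFont systemFontOfSize:13.0] };
        return [string sizeWithAttributes:attributes];
    }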

PHP: Right way to declare variable before use in loop

I have a variable that is built in loop. Something like:

$str = "";
for($i = 0; $i < 10; $i++) $str .= "something";

If $str = "" is ommitted, I get undefined variable notice, but I thought php auto-declare a variable the first time it sees undeclared one?

How do I do this right?

  • You get the undefined variable notice because you're concatenating the variable's own value with another value.

    The equivalent of

    $str = $str . "something";

    So, it can't say what the initial value is. It's the equivalent of this:

    $str = [undefined value] . "something";

    What's the result of a concatenation of [undefined value] and "something"? The interpreter can't say...

    So, you have to put "" in the variable first to initialize its value, as you did.

    HTH

    From vIceBerg
  • It's safer to not use the auto-declare feature - that's why it issues a notice. A notice is the lowest level of warning, and won't be displayed by default. Most older PHP apps will issue lots of notices if you were to turn them on.

    From Greg
  • If you really need to make it a bit cleaner you could do:

    for($i = 0, $str = ''; $i < 10; $i++) $str .= "something";
    

    But what you have is what I normally do. vIceBerg explains it well.

    vIceBerg : Just a thought... can you do this: for($i = 0, $str = ''; $i < 10; $i++, $str .= "something"); ? I don't have a PHP box right now to test....
    Jonas Due Vesterheden : Apparently yes: zsh % echo '' | php somethingsomethingsomethingsomethingsomethingsomethingsomethingsomethingsomethingsomething
    Ross : Yep vIceBerg - you don't even need the loop body. I find it easier to understand with the body, but seeing as it's one line you don't need it.
    From Ross
  • PHP variables that are auto-declared are registered as undefined, which is why you're receiving the notice.

    It is generally better to declare PHP variables prior to using them, though many of the lazy among us, myself included, don't always do that.

List View C# stay selected

Hello,

I have a list view where a double click on a record opens a new form to show the details, but the record in the list view loses the "selection"... How do I know which record was clicked???

Thanks

Maria João

  • Try setting the HideSelection property on the list view to false. It's enabled by default.

  • The listview control has a HideSelection property that defaults to True. Set this to False and the current row will remain highlighted even if the control loses focus.
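
    In code, assuming a ListView field named listView1, that is simply:

    // Keep the selected row highlighted even when the ListView loses focus,
    // e.g. when the details form opens.
    listView1.HideSelection = false;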

    From BenR
  • Thanks very much for the quick answer, I'm going to try it.

  • This works well, but I want to ask if there is a chance of changing the back color?? Thanks

    Maria João

How can I see the SQL ActiveRecord generates?

I'd like to check a few queries generated by ActiveRecord, but I don't need to actually run them. Is there a way to get at the query before it returns its result?

How to POST a FORM from HTML to ASPX page

How do I post a form from an HTML page to an ASPX page (2.0) and read the values?

I currently have an ASP.NET site using the Membership provider and everything is working fine. Users can log in from the Login.aspx page.

We now want to be able to have users log in directly from another web site--which is basically a static HTML page. The users need to be able to enter their name and password on this HTML page and have it POST to my Login.aspx page (where I can then log them in manually).

Is it possible to pass form values from HTML to ASPX? I have tried everything and the Request.Form.Keys collection is always empty. I can't use a HTTP GET as these are credentials and can't be passed on a query string.

The only way I know of is an iframe.

  • Are you sure your HTML form is correct, and does, in fact, do an HTTP POST? I would suggest running Fiddler2, then trying to log in via your Login.aspx and via the remote HTML site, and comparing the requests that are sent to the server. For me, ASP.NET always worked fine -- if the HTTP request contains a valid POST, I can get to the values using Request.Form...

  • This is very possible. I mocked up 3 pages which should give you a proof of concept:

    .aspx page:

    <form id="form1" runat="server">
        <div>
            <asp:TextBox ID="TextBox1" runat="server"></asp:TextBox>
            <asp:TextBox TextMode="password" ID="TextBox2" runat="server"></asp:TextBox>
            <asp:Button ID="Button1" runat="server" Text="Button" />
        </div>
    </form>
    

    code behind:

    Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
        For Each s As String In Request.Form.AllKeys
            Response.Write(s & ": " & Request.Form(s) & "<br />")
        Next
    End Sub
    

    Separate HTML page:

    <form action="http://localhost/MyTestApp/Default.aspx" method="post">
        <input name="TextBox1" type="text" value="" id="TextBox1" />
        <input name="TextBox2" type="password" id="TextBox2" />
        <input type="submit" name="Button1" value="Button" id="Button1" />
    </form>
    

    ...and it regurgitates the form values as expected. If this isn't working, as others suggested, use a traffic analysis tool (fiddler, ethereal), because something probably isn't going where you're expecting.

  • You sure can.

    The easiest way to see how you might do this is to browse to the aspx page you want to post to. Then save the source of that page as HTML. Change the action of the form on your new html page to point back to the aspx page you originally copied it from.

    Add value tags to your form fields and put the data you want in there, then open the page and hit the submit button.

    From Flory
  • You sure can. Create an HTML page with a form that contains the necessary fields from the login.aspx page (i.e. username, etc.), and make sure they have the same IDs. For your action, make sure it's a post.

    You might have to do some code on the login.aspx page in the Page_Load function to read the form (in the Request.Form object) and call the appropriate functions to log the user in, but other than that, you should have access to the form, and can do what you want with it.

    From mjmcinto
  • Isn't this the type of thing Jeff warned about in his article on CSRF and XSRF attacks?

    http://www.codinghorror.com/blog/archives/001175.html

  • The Request.Form.Keys collection will be empty if none of your html inputs have NAMEs. It's easy to forget to put them there after you've been doing .NET for a while. Just name them and you'll be good to go.

    Anthony : It took me ages to realise that. When you do it in .NET, the asp.net engine automatically adds the name attribute. If you create your html form 'by hand' it's very easy to forget it, and then nothing gets posted and you wonder why.
    From Chris
  • Dear Fellows, it is true that we can post form controls and receive them at an ASPX page, as mentioned already. I have a different problem: how can we use the HTML file upload control, i.e. get the file information and post it to an ASPX page for further processing (saving at the server, etc.)? I mean, can we use a file upload control in an HTML page, and how do we get its value/data in the ASPX page? Thanks in advance. Regards, Absials

    From Absials