Friday, April 15, 2011

SslStream.DataAvailable is not a valid member.

I am migrating C# code from using a NetworkStream to an SslStream; however, where I use stream.DataAvailable I get the error:

Error 1 'System.Net.Security.SslStream' does not contain a definition for 'DataAvailable' and no extension method 'DataAvailable' accepting a first argument of type 'System.Net.Security.SslStream' could be found (are you missing a using directive or an assembly reference?)

Now, my local MSDN copy does not include DataAvailable as a member of SslStream; however, http://msdn.microsoft.com/en-us/library/dd170317.aspx says it does have the DataAvailable member. Here is a copy of my code.

using System;
using System.Collections;
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;
using System.Net.Security;
using System.Security.Authentication;
using System.Security.Cryptography.X509Certificates;
using System.IO;

namespace Node
{

  public static class SSLCommunicator
  {
    static TcpClient client = null;
    static SslStream stream = null;
    static List<byte> networkStreamInput = new List<byte>();
    public static void connect(string server, Int32 port)
    {
        try
        {
          client = new TcpClient(server, port);
          stream = new SslStream(client.GetStream(), false);
    ...
    ...
    ...
    public static List<DataBlock> getServerInput() 
    {
      List<DataBlock> ret = new List<DataBlock>();
      try
      {
        //check to see if stream is readable.
        if (stream.CanRead)
        {
          //Check to see if there is data available.
          if (stream.DataAvailable)
          {
            byte[] readBuffer = new byte[1024];
            int numberOfBytesRead = 0;
            //while data is available buffer the data.
            do
            {
              numberOfBytesRead = stream.Read(readBuffer, 0, readBuffer.Length);
              byte[] tmp = new byte[numberOfBytesRead];
              Array.Copy(readBuffer, tmp, numberOfBytesRead);
              networkStreamInput.AddRange(tmp);
            } while (stream.DataAvailable);
     ...

Also, if you have a better way to get the output of the stream into a managed array (there will be some parsing done on it later in the code), I would love the help. I am using Visual Studio 2008.

--EDIT I just realized I linked to the embedded SDK. This is not an embedded system, so how do I see if data is available in the normal .NET SDK?

From stackoverflow
  • The page you are looking at is for the .NET Micro Framework.

    According to this page for .NET 2.0 and this page for .NET 3.5, there is no DataAvailable property on SslStream.

    Edit: Can't you just call Read() and see if you get anything back? I don't think this will block.
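
    A workaround that is often suggested (a hedged sketch, not from the answers in this thread; it assumes you still hold the TcpClient from the question's code): keep a reference to the underlying NetworkStream and poll its DataAvailable property. Note that it reports raw, still-encrypted bytes on the socket, so it is only a hint that a Read() on the SslStream won't block:

    NetworkStream netStream = client.GetStream();
    SslStream stream = new SslStream(netStream, false);

    // Later, in place of stream.DataAvailable:
    if (netStream.DataAvailable)
    {
        numberOfBytesRead = stream.Read(readBuffer, 0, readBuffer.Length);
    }

    As for buffering the stream's output into a managed array, writing each chunk to a MemoryStream and calling ToArray() at the end avoids the repeated List<byte> copies.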

  • I am facing the same problem. I'm trying to find a way for my client to know that the stream of data has ended. In my application, just like yours, I am running a loop with BeginRead and storing all collected data in a MemoryStream. However, I need to know that the data is complete, and then I need to send the MemoryStream to another class for further processing. How did you get around this problem? I will greatly appreciate your help. Thanks.
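
    The comment above also raises message framing, which neither SslStream nor NetworkStream solves: TLS hands you a byte stream with no message boundaries. A common approach (a hedged sketch, not from this thread) is to length-prefix each message so the reader knows exactly when a message is complete:

    // The sender writes a 4-byte big-endian length followed by the payload;
    // the reader loops until exactly that many bytes have arrived.
    static byte[] ReadMessage(SslStream stream)
    {
        byte[] lengthBytes = ReadExactly(stream, 4);
        int length = (lengthBytes[0] << 24) | (lengthBytes[1] << 16)
                   | (lengthBytes[2] << 8) | lengthBytes[3];
        return ReadExactly(stream, length);
    }

    static byte[] ReadExactly(Stream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0)
                throw new EndOfStreamException("Connection closed mid-message.");
            offset += read;
        }
        return buffer;
    }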

PL/SQL Trigger - Dynamically reference :NEW or :OLD

Is it possible to dynamically reference the :NEW/OLD pseudo records, or copy them?

I'm doing an audit trigger for a very wide table, so I would like to avoid having separate triggers for insert/delete/update.

When updating/inserting I want to record the :NEW values in the audit table, when deleting I want to record the :OLD values.

From stackoverflow
  • Why don't you use Oracle's built in standard or fine-grained auditing?

    Matthew Watson : Standard Edition.
  • You can use a compound trigger and programmatically check whether it is an insert, update, or delete (I/U/D).

    Compound Triggers

    Regards K

    Matthew Watson : That is what I'm doing, but I'm trying to avoid having multiple insert statements within the trigger.
    Khb : I'm not sure I follow, are you doing something like this? CREATE OR REPLACE TRIGGER TRG BEFORE INSERT OR UPDATE OR DELETE ON TBL BEGIN IF INSERTING THEN ...(record :new) ELSIF UPDATING THEN ...(record :new) ELSIF DELETING THEN ...(record :old) END IF; ?
    Matthew Watson : I have a single insert statement INSERT INTO HIST ( EMP_ID, NAME ) VALUES (:NEW.EMP_ID , :NEW.NAME ); when deleting, though, I want to use :OLD, and not have a separate insert statement for that.
  • You could try:

    declare
      l_deleting_ind varchar2(1) := case when DELETING then 'Y' end;
    begin
      insert into audit_table (col1, col2)
      values
       ( CASE WHEN l_deleting_ind = 'Y' THEN :OLD.col1 ELSE :NEW.col1 END
       , CASE WHEN l_deleting_ind = 'Y' THEN :OLD.col2 ELSE :NEW.col2 END
       );
    end;
    

    I found that the variable was required - you can't access DELETING directly in the insert statement.

  • WOW, you want to have only ONE insert in your trigger to avoid what?

    *"I have a single insert statement INSERT INTO HIST ( EMP_ID, NAME ) VALUES (:NEW.EMP_ID , :NEW.NAME ) ; when deleting though, I want to use :OLD , not not have a seperate insert statement for that. "*

    It's a wide table. So? It's not like there's no REPLACE in text editors; you're not going to write the insert again, just copy, paste, select, replace :NEW with :OLD.

    Tony does have a solution, but I seriously doubt that it performs better than 2 inserts would.

    What's the big deal?


    EDIT

    the main thing I'm trying to avoid is having to manage 2 inserts when the table changes. – Matthew Watson

    I battle this attitude all the time. Those who write Java or C++ or .Net have a built-in RBO (rule-based optimizer)... Do this, this is good. Don't do that, that's bad. They write code according to these rules and that's fine. The problem is when these rules are applied to databases. Databases don't behave the same way code does.

    In the code world, having essentially the same code in two "places" is bad. We avoid it. One would abstract that code to a function and call it from the two places and thus avoid maintaining it twice, and possibly missing one, etc. We all know the drill.

    In this case, while it's true that in the end I recommend two inserts, they are separated by an ELSE. You won't change one and forget the other one. IT'S Right There. It's not in a different package, or in some compiled code, or even somewhere else in the same trigger. They're right beside each other, there's an ELSE and the Insert is repeated with :NEW, instead of :OLD. Why am I so crazed about this? Does it really make a difference here? I know two inserts won't be worse than other ideas, and it could be better.

    The real reason is being prepared for the times when it does matter. If you're avoiding two inserts just for the sake of maintenance, you're going to miss the times when this makes a HUGE difference.

    INSERT INTO log
    SELECT * FROM myTable 
    WHERE flag = 'TRUE'
    
    ELSE                          -- code omitted for clarity
    
    INSERT INTO log
    SELECT * FROM myTable 
    WHERE flag = 'FALSE'
    

    Some, including Matthew, would say this is bad code because there are two inserts. I could easily replace 'TRUE' and 'FALSE' with a bind variable and flip it at will, and that's what most people would do. But if True is .1% of the values and 99.9% is False, you want two inserts, because you want two execution plans: one is better off with an index and the other with a full table scan. So, yes, you do have two inserts to maintain. That's not always bad, and in this case it's good and desirable.

    Matthew Watson : the main thing I'm trying to avoid is having to manage 2 inserts when the table changes.
  • Use a compound trigger, as others have suggested. Save the old or new values, as appropriate, to variables, and use the variables in your insert statement:

    declare
      v_col1  table_name.col1%type;
      v_col2  table_name.col2%type;
    begin
      if deleting then
        v_col1 := :old.col1;
        v_col2 := :old.col2;
      else
        v_col1 := :new.col1;
        v_col2 := :new.col2;
      end if;
    
      insert into audit_table(col1, col2)
      values(v_col1, v_col2);
    end;
    
    Matthew Watson : mm, yeah, I was hoping to be able to just copy the record. Oh well, thanks.

Quick question on reflection in C#

I am getting started with the notion of test-driven development, and kind of failing, since I find that I roughly know what the test is going to be, but I can't figure out how to get it to do what I want. What I have is a property that has a public getter and an internal setter. I'd like to test the functionality by accessing the internal setter from the unit test, but I can't figure out just how to do it. Here is the test:

 [Test()]
 public void HandleInput() {
  _requestType = _request.GetType();
  PropertyInfo propStdin =
   _requestType.GetProperty("StandardInput", BindingFlags.Public | BindingFlags.NonPublic);
  if(propStdin == null) {
   // Bug in the test.
   throw new Exception("There is a bug in the test. Reflection of stdin property returned null.");
  }
  MethodInfo setStdin = propStdin.GetSetMethod();

  // This will fail at the moment since nothing is here to make this happen.
  Assert.AreEqual("NewInputNewRequestInput", _request.StandardInput);
 }

Now, the problem is that when I run the test, I get:

[mono-2.4] mbt@zest:~/Projects/StaffASAP/Test.FastCGI/bin/Debug$ nunit-console2 Test.FastCGI.dll
NUnit version 2.4.8
Copyright (C) 2002-2007 Charlie Poole.
Copyright (C) 2002-2004 James W. Newkirk, Michael C. Two, Alexei A. Vorontsov.
Copyright (C) 2000-2002 Philip Craig.
All Rights Reserved.

Runtime Environment - 
   OS Version: Unix 2.6.29.0
  CLR Version: 2.0.50727.1433 ( Mono 2.4 )

..F
Tests run: 2, Failures: 1, Not run: 0, Time: 0.111 seconds

Test Case Failures:
1) Test.FastCGI.tRequest.HandleInput : System.Exception : There is a bug in the test. Reflection of stdin property returned null.
at Test.FastCGI.tRequest.HandleInput () [0x00051] in /home/mbt/Projects/StaffASAP/Test.FastCGI/tRequest.cs:54
at (wrapper managed-to-native) System.Reflection.MonoMethod:InternalInvoke (object,object[],System.Exception&)
at System.Reflection.MonoMethod.Invoke (System.Object obj, BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00057] in /media/disk/mono-2.4/sources/mono-2.4/mcs/class/corlib/System.Reflection/MonoMethod.cs:159

So, I must be attempting to access the property incorrectly, but from looking at the documentation, I don't know what I am doing wrong. What am I doing wrong?

From stackoverflow
  • You may want to look at the InternalsVisibleTo assembly attribute:

    [Y]ou can expose an assembly's internal methods/properties/classes to a "friend" assembly. Normally internals are only accessible to members of the same assembly, and are often used to hide "plumbing" methods and utility classes.

  • Here is a similar article on SO that deals with unit tests, internal members and reflection.

  • Are you missing | BindingFlags.Instance in the GetProperty(...) call?

    It would be much nicer, however, to expose the internal members to the test program through the InternalsVisibleTo attribute, as you don't need to rely on reflection, and refactoring will propagate if the test project is part of your solution in Visual Studio. (A sketch combining both fixes follows the comments below.)

    Michael Trausch : Thanks for the additional pointer to BindingFlags.Instance. That works, as does the InternalsVisibleTo attribute. I've gone with using the InternalsVisibleTo attribute since it makes the tests easier to write and more readable, but the other info is great, too!
    Cecil Has a Name : While reflection is interesting and powerful, you'll find it is picky about the details.
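
    Putting both fixes from the answer above together, a minimal sketch (it assumes StandardInput is an instance property on the request type):

    // Instance (or Static) must be combined with Public/NonPublic,
    // otherwise GetProperty() returns null.
    PropertyInfo propStdin = _requestType.GetProperty(
        "StandardInput",
        BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance);

    // Passing true asks for the set accessor even when it is non-public,
    // which is what an internal setter looks like from another assembly.
    MethodInfo setStdin = propStdin.GetSetMethod(true);
    setStdin.Invoke(_request, new object[] { "NewInputNewRequestInput" });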

SQL JOIN: ON vs Equals

Is there any significant difference between the following?

SELECT a.name, b.name FROM a, b WHERE a.id = b.id AND a.id = 1

AND

SELECT a.name, b.name FROM a INNER JOIN b ON a.id = b.id WHERE a.id = 1

Do SO users have a preference of one over the other?

From stackoverflow
  • No difference. I find the first format more readable and use the second format only when doing other types of joins (OUTER, LEFT, etc.).

  • There is no difference, but the readability of the second is much better when you have a big multi-join query with extra where clauses for filtering.
    Separating the join clauses and the filter clauses is a Good Thing :)

    Joel Coehoorn : There's no difference for this query, but for other queries there are some things you can do with the 'INNER JOIN' style syntax that you can't do with the a,b style syntax.
    Lars Mæhlum : Joel: Very true :)
  • The former is ANSI 89 syntax, the latter is ANSI 92.

    For that specific query there is no difference. However, with the former you lose the ability to separate a filter from a join condition in complex queries, and the syntax to specify LEFT vs RIGHT vs INNER is often confusing, especially if you have to go back and forth between different db vendors. I don't like the older syntax at all.

  • The second form is SQL-92 compliant syntax. This should mean that it is supported by all current and future database vendors. However, the truth is that the first form is so pervasive that it is also guaranteed to be around for longer than we care.

    Otherwise they are the same in all respects in how databases treat the two.

  • There is no difference to the SQL query engine.

    For readability, the latter is much easier to read if you use linebreaks and indentation.

    For INNER JOINs, it does not matter if you put "filters" and "joins" in the ON or WHERE clause; the query optimizer should decide what to do first anyway (it may choose to do a filter first and a join later, or vice versa).

    For OUTER JOINs, however, there is a difference, and sometimes you'll want to put the condition in the ON clause, sometimes in the WHERE. Putting a condition in the WHERE clause for an OUTER JOIN can turn it into an INNER JOIN (because of how NULLs work).

    For example, check the readability between the two following samples:

    SELECT c.customer_no, o.order_no, a.article_no, r.price
    FROM customer c, order o, orderrow r, article a
    WHERE o.customer_id = c.customer_id
    AND r.order_id = o.order_id
    AND a.article_id = r.article_id
    AND o.orderdate >= '2003-01-01'
    AND o.orderdate < '2004-01-01'
    AND c.customer_name LIKE 'A%'
    ORDER BY r.price DESC
    

    vs

    SELECT c.customer_no, o.order_no, a.article_no, r.price
    FROM customer c 
    INNER JOIN order o
       ON  o.customer_id = c.customer_id
       AND o.orderdate >= '2003-01-01'
       AND o.orderdate < '2004-01-01'
    INNER JOIN orderrow r
       ON  r.order_id = o.order_id
    INNER JOIN article a 
       ON  a.article_id = r.article_id
    WHERE  c.customer_name LIKE 'A%'
    ORDER BY r.price DESC
    
    Joel Coehoorn : Putting a condition in a where clause vs a join clause most certainly does matter for outer joins.
    Cybis : Why was this downvoted? I'm voting it back up. Brimstedt was simply showing that the second version is easier to read (and I agree with him). Joins and filters are, conceptually, two different things.
    Brimstedt : Ah, yes, for outer joins it makes a difference. Thanks for pointing it out, I'll edit my answer for completeness.
    Brimstedt : Not that I care much about the downvote, but to the query engine it does not matter. To humans it does, and I find it more readable. :-)
  • Whilst you can perform most tasks using both, and in your case there is no difference whatsoever, I will always use the second.

    1. It's the current supported standard
    2. It keeps joins in the FROM clause and filters in the WHERE clause
    3. It makes more complex LEFT, RIGHT, FULL OUTER joins much easier
    4. MSSQL Help is all based around that syntax therefore much easier to get help on your problem queries
  • While there is no difference technically, you need to be extra careful about doing joins using the first method. If you get it wrong by accident, you could end up doing a Cartesian join between your a and b tables (a very long, memory- and CPU-intensive query: it will match each single row in a with all rows in b, which is bad if a and b are large tables to begin with). Using an explicit INNER JOIN is both safer and easier to read.

Converting webpages from UTF-8 to ISO-8859-1 in linux

Anyone have a neat trick on how to convert a number of php and html files from UTF-8 to ISO-8859-1 in linux (Ubuntu)?

From stackoverflow
  • I think iconv is your answer...

    From man iconv:

      NAME
          iconv - Convert encoding of given files from one encoding to another
    
      SYNOPSIS
          iconv -f encoding -t encoding inputfile
    
      DESCRIPTION
          The iconv program converts the encoding of characters in inputfile from one coded 
          character set to another. The result is written to standard output unless otherwise 
          specified by the --output option.
    
          .....
    

    So you could probably do a

    find $my_base_dir \( -name "*.php" -o -name "*.html" \) -exec sh -c "( \
       iconv -f UTF-8 -t ISO-8859-1 {} -o {}.iconv ; \
       mv {}.iconv {} ; \
    )" \;
    

    This will recursively find the appropriately named files and re-encode them. Note the \( ... \) grouping around the -name tests: without it, find applies -exec only to the *.html branch. The temporary file is necessary, as iconv would otherwise truncate the output file before starting to work.

  • Ubuntu has recode

    $ sudo apt-get install recode
    $ recode UTF-8..latin1 *.php
    

    Recursively, thanks to Ted Dziuba:

    $ find . -name "*.php" -exec recode UTF-8..latin1 {} \;
    
    David Zaslavsky : recode is a fairly standard Linux program - not so standard that it's always installed by default, but it should be available on all distributions, not just Ubuntu.
    Svish : How can I do this recursively?
    Ted Dziuba : Recursively, it's find . -name "*.php" -exec recode UTF-8..latin1 {} \;
    Luiz Damim : +1 Found your answer while searching google for this conversion. It saved my day :)

How to include an html page into an html page

Hi friends,

I want to include an HTML page inside another HTML page. Is it possible? Please give me the code. I don't want to do it in PHP; I know that in PHP we can use include for this situation. How can I achieve the same purely in HTML, without using the iframe and frame concepts?

From stackoverflow
  • <iframe src="page.html"></iframe>
    

    You will need to add some styling to this iframe. You can specify width, height, and if you want it to look like a part of the original page, include frameborder="0".

    There is no other way to do it in pure HTML. This is what iframes were built for; it's like saying you want to fry an egg without an egg.

  • If you're just trying to stick in your own HTML from another file, and you consider a Server Side Include to be "pure HTML" (because it kind of looks like an HTML comment and isn't using something "dirty" like PHP):

    <!--#include virtual="/footer.html" -->
    
    Sam152 : This isn't a pure HTML solution, is it?
    Daniel LeCheminant : @Sam152: Shhh! ;-]
    Sam152 : Yeah, +1. It's probably the better way of doing it. He also says that he doesn't want to use an iframe, so who knows which solution will work best.
    praveenjayapal : Hey, here I got an idea: I included the whole content inside JavaScript with document.write, then placed the JavaScript file inside the HTML. It's working.
  • If you mean client-side, then you will have to use JavaScript or frames.
    A simple way to start: try jQuery.

    $("#links").load("/Main_Page #jq-p-Getting-Started li");
    

    More at jQuery Docs

    If you want to use IFrames then start with Wikipedia on IFrames

    <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
       "http://www.w3.org/TR/html4/loose.dtd">
    <html>
      <head>
            <title>Example</title>
      </head>
      <body>
            The material below comes from the website http://example.com/
            <iframe src="http://example.com/" height="200">
                Alternative text for browsers that do not understand IFrames.
            </iframe>
       </body>
    </html>
    
    praveenjayapal : Hey, here I got an idea: I included the whole content inside JavaScript with document.write, then placed the JavaScript file inside the HTML. It's working.
  • You can use an object element-

    <object type="text/html" data="urltofile.html"></object>
    

    Or, on your local server, Ajax can return a string of HTML (responseText) that you can use to document.write a new window, or you can edit out the head and body tags and add the rest to a div or other block element in the current page.

  • @PraveenJayapal: How do I include the HTML content in a JavaScript file? And how do I place it inside the HTML page?

What is the best technique for consistent form, function between all web browsers (including Google Chrome)?

Short version: What is the cleanest and most maintainable technique for consistent presentation and AJAX function across all browsers used by both web developers and web developers' end-users?

  • IE 6, 7, 8
  • Firefox 2, 3
  • Safari
  • Google Chrome
  • Opera

Long version: I wrote a web app aimed at other web developers. I want my app to support the major web browsers (plus Google Chrome) in both presentation and AJAX behavior.

I began on Firefox/Firebug, then added conditional comments for consistent styling under IE 6 and 7. Next, to my amazement, I discovered that jQuery does not behave identically in IE, so I changed my JavaScript to be portable across FF and IE using conditionals and less pure jQuery.

Today, I started testing on WebKit and Google Chrome and discovered that not only are the styles inconsistent with both FF and IE, but the JavaScript is not executing at all, probably due to a syntax or parse error. I expected some CSS work, but now I have more JavaScript debugging to do! At this point, I want to step back and think before writing piles of special cases for all situations.

I am not looking for a silver bullet, just best practices to keep things as understandable and maintainable as possible. I would prefer that this work with no server-side intelligence; however, if there is an advantage to, for example, checking the user-agent and then returning different files to different browsers, that is fine, provided it keeps the web app as a whole comprehensible and maintainable. Thank you all very much!

From stackoverflow
  • Just so you've got one less browser to worry about, Chrome uses the same rendering engine as Safari. So if it works in Safari, it should work exactly the same in Chrome.

    See this post on Matt Cutts' blog.

    Google Chrome uses WebKit for rendering, which is the same rendering engine as Apple’s Safari browser, so if your site is compatible with Safari it should work great in Chrome.

    Update: Looks like this is now outdated info. Please see Vox's comment on this answer.

    VoxPelli : Chrome and Safari are using different JavaScript engines, so behavior might differ, and in fact I've also seen some rendering inconsistencies - perhaps originating from the fact that Safari and Chrome are probably running different builds of WebKit.
  • For UI, check out Ext.

    It's great as a standalone library, though it can also be used with jQuery, YUI, Prototype and GWT.

    Samples: http://extjs.com/deploy/dev/examples/samples.html

  • Chrome is actually a little different from Safari: it uses a completely different JavaScript implementation, and problems have been reported with both Prototype and jQuery. I wouldn't worry about it too much for now; it's still an early beta version of the browser, and such inconsistencies will probably be treated as bugs. Here's the bug report.

  • I am in a similar situation, working on a web app that is targeted at IT professionals, and required to support the same set of browsers, minus Opera.

    Some general things I've learned so far:

    • Test often, in as many of your target browsers as you can. Make sure you have time for this in your development schedule.
    • Toolkits can get you part of the way to cross-browser support, but will eventually miss something on some browser. Plan some time for debugging and researching fixes for specific browsers.
    • If you need something that's not in a toolkit and can't find a free code snippet, invest some time to write utility functions that encapsulate the browser-dependent behavior.
    • Educate yourself about known browser bugs, so that you can steer your implementation around them.

    A few more-specific things I've learned:

    • Use conditional code based on the user-agent only as a last resort, because different generations of the "same" browser may have different features. Instead, test for standards-compliant behavior first — e.g., if(node.addEventListener)..., then common non-standard functions — e.g., if(window.attachEvent)..., and then, if you must, look at the user-agent for a specific browser type & version number.
    • Knowing when the DOM is 'ready' for script access is different in just about every browser. A good toolkit will abstract this for you.
    • Event handlers are different in just about every browser. A good toolkit will abstract this for you.
    • Creating DOM elements, particularly form controls or elements with attributes, can be tricky with document.createElement and element.setAttribute. While not standard (and kinda yucky), using node.innerHTML with strings that contain bits of HTML seems to be more reliable across browser types. I have yet to find a toolkit that will let you use element.setAttribute to add a 'name' to a form element in IE.
    • CSS differences (and bugs) are just as important as JS differences.
    • The 'core' Javascript features (String, Date, RegExp, Array functions) seem to be pretty reliable and consistent across browsers, especially relative to the DOM/CSS/Window functions. There's some small joy in the fact that the language isn't entirely different on every platform. :-)

    I haven't really run into any Chrome-specific JS bugs, but it's always one of the first browsers I test.

    HTH

  • One "silver bullet" you may consider turning to is Google Web Toolkit (GWT).

    I believe it supports all the browsers you are interested in, and gives you the ability to code your UI in a Java-compatible IDE such as Eclipse. The advantage of this is you can use IDE tools for code completion and compile-time error checking, which greatly improves development on large-scale UI projects.

    If you use GWT UI components, it will hide a lot of browser-specific nastiness from having to be dealt with, and when you compile, it will create a compact, deployable file for each browser platform. This way you never download any IE-specific code if you are viewing the app in Firefox. You will also have a client-side stub generated which will load the appropriate compiled bundle of JS. To sweeten the deal, these files are cacheable, so perceived performance is generally improved for returning visitors.

    system PAUSE : Thanks! It looks nifty, but it would be difficult to migrate my existing JavaScript codebase (with jQuery/YUI/ie7-js/etc) to a purely Java codebase, esp. without much Java expertise on the team. But nice to find that Java/J2EE is not required on the server, and that IE6 is supported.
  • If you're starting from a base reset or framework and have accounted for IE and it's still all freaky, you may want to recheck the following:

    • Everything validates? CSS and HTML?
    • Any broken links to an included file (js, css, etc?). In Chrome/Safari, if your stylesheet link is busted, all of your links might end up red. (something to do with the default 404 styling I think)
    • Any odd requirements of your js plugins that you might be using? (does the css file have to come before the js file, like with jquery.thickbox?)
  • The landscape has evolved considerably to accommodate cross-browser development. jQuery, Prototype and other frameworks exist for cross-browser JavaScript. CSS resets are good for starting on a common blank canvas for all browsers. Blueprint and 960 are both CSS frameworks that help with layouts using CSS grid techniques, which seem to be getting very popular these days.

    As for other CSS quirks across the different browsers, there is no holy grail here; the only option is to test your website across different browsers, use this awesome resource, and definitely join a mailing list to save some time.

    If you are working on a high-volume production site, then you can use a service like browsercam.com in the end game to ensure the site doesn't break horribly in some browser.

    Lastly, don't try to make the site look the same in every browser. Your primary design should target IE/FF and you should be okay with reasonable compromises on others. Use the graded browser chart to narrow in on browser support.

    As for best practices, start using wireframes on blank paper or a service like Balsamiq Mockups. I am still surprised how many developers start with an editor instead of a wireframe; then again, I only switched a year back, before realizing how big a time saver it is. Have a clean separation of layout (HTML), presentation (CSS) and behaviors (JavaScript). There should be no styling elements in HTML, and no presentation in JavaScript (use .addClass('highlight') instead of .css({'background-color': 'red'});).

    If you are not familiar with any of the bold terms in this post, Googling them should be fruitful for your web development career and productivity.

  • If your very top priority is exactly consistent presentation on all the browsers listed with no disparities, you should probably be looking at AS3 and Flex.

  • Personally, I use MooTools as a simple, lightweight JavaScript framework. It is simple to use and supports all the browsers above (except Chrome, but that seems to work too, as far as I can tell).

    Also, to ensure consistency across the browsers, I get a feature/style/behaviour/etc. to work in one browser first (usually Firefox 3 with Firebug), then immediately check to make sure it works in all the other browsers (leaving IE6 for last). If it doesn't, I invest the time to fix it right away, because otherwise I know I won't have time later (in my experience, getting things to work cross-browser takes about 50% of my dev time ;-) )

  • I've found four things helpful in developing JavaScript applications:

    • Feature detection
    • Libraries
    • Iterative Development using Virtualization
    • JavaScript: The Definitive Guide, Douglas Crockford & John Resig

    Feature Detection

    Use reflection to ask if the browser supports the desired feature. If you want to know what event handling a browser supports, you can test if(el.addEventListener) for W3C compliance, then if(el.attachEvent) for the IE type, and finally fall back on el['onSomeEvent'].

    ONE BIG BUT!

    Browsers sometimes lie about what features they support. I can't remember the details, but I ran into an issue where Firefox implemented a DOM feature but would return false if you tested for it!

    Libraries

    Since you're already working with jQuery, I'll save the explanation. But if you're running into problems, you may want to consider YUI for its wonderful cross-browser compatibility. They even work together.

    Iterative Development with Virtualization

    Perhaps my best advice: run all your test environments at once. Get a Linux distro, Compiz Fusion and a bunch of RAM. Download a copy of either VMware Server or Sun's VirtualBox and install a few operating systems. Get images for Windows XP, Windows Vista and Mac OS X.

    The basic idea is this: Compiz Fusion gives you 4 desktops mapped onto a cube. One of these desktops is your Linux computer, the next your virtual Windows XP box, the one after that Vista, the last Mac OS X. After writing some code, you alt-tab into a virtual computer and check out your work. Plus it looks awesome.

    JavaScript: The Definitive Guide, Douglas Crockford & John Resig

    These three sources provide most of my information for JavaScript development. The Definitive Guide is perhaps the best reference book for JavaScript.

    Douglas Crockford is a JavaScript guru (I hate the word) at Yahoo. Look up his series "Douglas Crockford: Theory of the DOM", "Douglas Crockford: Advanced JavaScript", and "Douglas Crockford: The Good Parts" on Yahoo! Video.

    John Resig (as you know) wrote jQuery. His website at ejohn.org contains a wealth of JavaScript information, and if you dig around on Google you'll find he's given a number of presentations on defensive JavaScript techniques.

    ... Good luck!

    jhs : rooney, thank you for your advice. You imply that the troublemaker is Javascript, not so much HTML/CSS--a good point. Virtualization is an interesting solution. I've been using EC2 recently for throwaway test work. Maybe it's time for a RAM upgrade :)
  • Validating your JavaScript on JSLint.com (with the "Good Parts" options plus "Assume a browser") makes it less likely to have JavaScript behaving differently in FF, Safari, etc.

    Otherwise, using standards and validating your code, as well as building on existing techniques like jQuery, should make your site behave the same in all browsers except IE - and there's no magic recipe for IE; it's just bugs everywhere...

Trouble setting up witness in SQL Server mirroring scheme w/ error

I've got a trio of Windows servers (data1, data2 and datawitness) that aren't part of any domain and don't use AD. I'm trying to set up mirroring based on the instructions at http://alan328.com/SQL2005_Database_Mirroring_Tutorial.aspx. I've had success right up until the final set of instructions where I tell data1 to use datawitness as the witness server. That step fails with the following message:

alter database MyDatabase set witness = 'TCP://datawitness.somedomain.com:7024'

The ALTER DATABASE command could not be sent to the remote server instance 'TCP://datawitness.somedomain.com:7024'. The database mirroring configuration was not changed. Verify that the server is connected, and try again.

I've tested both port 7024 and port 1433 using telnet, and both servers can indeed connect with each other. I'm also able to add a connection to the witness server from SQL Server Manager on the primary server. I've used the Configuration Manager on both servers to enable Named Pipes and verify that IP traffic is enabled and using port 1433 by default.

What else could it be? Do I need any additional ports open for this to work? (The firewall rules are very restrictive, but I know traffic on the previously mentioned ports is explicitly allowed)

Caveats that are worth mentioning here:

  • Each server is in a different network segment

  • The servers don't use AD and aren't part of a domain

  • There is no DNS server configured for these servers, so I'm using the HOSTS file to map domain names to IP addresses (verified using telnet, ping, etc).

  • The firewall rules are very restrictive and I don't have direct access to tweak them, though I can call in a change if needed

  • Data1 and Data2 are using SQL Server 2008, Datawitness is using SQL Express 2005. All of them use the default instance (i.e. none of them are named instances)

From stackoverflow
  • After combing through blogs and KB articles and forum posts and reinstalling and reconfiguring and rebooting and profiling, etc, etc, etc, I finally found the key to the puzzle - an entry in the event log on the witness server reported this error:

    Database mirroring connection error 2 'DNS lookup failed with error: '11001(No such host is known.)'.' for 'TCP://ABC-WEB01:7024'.
    

    I had used a hosts file to map mock domain names for all three servers in the form of datax.mydomain.com. However, it is now apparent that the witness was trying to communicate back using the name of the primary server, which I did not have a hosts entry for. Simply adding another entry for ABC-WEB01 pointing to the primary server did the trick. No errors, and the mirroring is finally complete.

    Hope this saves someone else a billion hours.

Limit an html form input to a certain float range

Is there a way to limit a form input field to a certain numeric range, say (0, 100)?

I'm filtering the input in the onkeydown event to accept only numbers. The problem is that I want to reject a number if it would make the input go out of range.

So I need a way to see whether the current value of the input, combined with the key the user is pressing, will end up within the range.

I tried using:

if ((parseFloat(this.value) + parseFloat(String.fromCharCode(e.keyCode))) > 100) {
    return false;
}

The thing is, e.keyCode can return different codes for the same number; right now it returns 57 for the number 9, but 105 if I press the number on the numpad.

Is there a way to accomplish this?

From stackoverflow
  • Personally, I would just check it when the field loses focus (or when the form is submitted). Popping up errors as the user is typing (or preventing their keystrokes from registering in the field) is usually just going to annoy them.

    And of course you probably knew this already, but make sure you check the value on the server side after the form is submitted as well. Never rely on javascript validation!

  • Adding the current value and the float value of the character typed is not what you want. Consider: if the current value is 99.0 and the user types a "5", the actual value is 99.05, but your expression would evaluate to 104.0. You need to append the key character to the current value before parsing anything into a float.

    As for the key code, here is a reference to the javascript key codes. Using that you could write your own function like this:

    function fromKeyCode(code) {
      var asciiCode = code;
      if (code > 95 && code < 106) {
        asciiCode -= 48;
      }
      return String.fromCharCode(asciiCode);
    }
    
    Cesar : You are absolutely right! But my problem remains. I should have asked how to get a number from the corresponding keyCode
    CodeMonkey1 : Added solution to your keyCode problem.
    Cesar : Thanks! That solved my question!
  • var total = new Number(20.00);
    
    alert(total.toFixed(2));
    

    That will allow you to fix the precision at 2 decimal places. In this case I am making sure, with a JS required-field check, that the money value only has 2 digits after the decimal point.

    I'm not sure if I understand your question fully, but check the Number() methods, there has to be something there to help you.

  • Trying to anticipate what the resulting value is going to be is harder than you think. Remember the user might be pressing backspace, or the cursor might not be at the end of the field, or the user might have part of the value selected, to be replaced on next keypress, and so on. It's also possible to manipulate the text field through mouse operations you won't get any say in.

    The traditional approach is to put your validation on the ‘keyup’ event instead of ‘keypress’. Then you get the full, post-change value of the field. You don't get the chance to deny the keypress, but you can reset the field to the last-known-good value instead.

    But either way it's best not to try to constrain input too tightly, because this can make it terribly difficult to type. For example, “12.” is an invalid number you might want to deny... but if you did, it would become very difficult to type “12.3”! Better to allow any input, but signal when the current input is out of bounds, by some mechanism (eg. turning the text red is common).

    Cesar : Yes, you're right. While testing I found the edge cases that you mention. I guess I'll go for the keyup event instead! Thanks very much!
  • You can catch the input on keyup, after the value contains the new input.

    Then just look at the value-

    inputelement.onkeyup = function (e) {
        var el = window.event ? window.event.srcElement : e.target;
        var val = parseFloat(el.value) || 0;
        el.value = Math.max(0, Math.min(100, val));
    };
    

C++ Native Way to Pack and Unpack String

Following my earlier question: is there a way to write a string in a compressed/binary form using a native C++ idiom? I am thinking of something like Perl's native pack and unpack.

From stackoverflow
  • If nothing else, I believe that you could always fold in perl headers and libraries. See perlguts.

    Leon Timmermans : Actually, I did just that for my libperl++. Unless I'm already embedding perl, I would opt for a different solution tough.
    Axeman : I've used guts for a bit of Inline::C programming--and I was gratified at how easy it was to use the *C* part of perl (not for calling subs though) and get the same use out of perl hashes and perl lists without a whole lot of additional coding. I was doing mainly C-type work, with perl guts.
  • Based on reading your previous question, I think you mean to say that you want a binary encoded output, rather than a "compressed" output. Generally, "compressed" is used to refer specifically to data that has been reduced in size through the application of an algorithm such as LZW encoding. In your case, you may find that the output is "compressed" in the sense that it is smaller because for a wide variety of numbers a binary representation is more efficient than an ASCII representation, but this is not "compression" in the standard sense, which may be why you are having trouble getting the answer you are looking for.

    I think you are really asking the following:

    Given a number in ASCII format (stored in a std::string, for example), how can I write this to a file as a binary encoding integer?

    There are two parts to the answer. First, you must convert the ASCII encoded string to an integer value. You may use a function such as strtol, which will return a long integer equivalent in value to your ASCII encoded number. Do be aware that there are limitations on the magnitude of the number that may be represented in a long integer, so if your numbers are very, very large, you may need to be more creative in translating them.

    Second, you must write the data to the output stream using ostream::write(), which does not attempt to format the bytes you give it. If you simply use the default operator<<() stream operation to write the values, you'll find that your numbers just get translated back to ASCII and written out that way. Put this all together like this:

    #include <stdlib.h>        // For strtol().
    #include <arpa/inet.h>     // For htonl().
    #include <fstream>         // For fstream.
    #include <string>          // For string.
    
    int main(int argc, char *argv[]) {
        char *dummy = 0;
        std::string value("12345");
    
    // Use strtol to convert to an int; "10" here means the string is
    // in decimal, as opposed to, e.g., hexadecimal or octal.
    
        long intValue = strtol(value.c_str(), &dummy, 10);
    
        // Convert the value to "network order"; not strictly necessary, 
        // but it is good hygiene.  Note that if you do this, you will 
        // have to convert back to "host order" with ntohl() when you read 
        // the data back.
    
        uint32_t netValue = htonl(intValue);
    
        // Create an output stream; make sure to open the file in binary mode.
    
        std::fstream output;
        output.open("out.dat", std::fstream::out | std::fstream::binary);
    
        // Write out the data using fstream::write(), not operator<<()!
    
        output.write(reinterpret_cast<char *>(&netValue), sizeof(netValue));
        output.close();
    }
    

Problem in PHP mail

Dear all, I am sending a reply mail using PHP to those who sent mail to me. My problem is that when I send the email, it sits in the SPAM folder. What should I do in order to deliver the mail correctly? Any idea, or any change to the procedure? My code:

<?php

$email_id="welcome@gmail.com";
$recipient = "@gmail.com"; //recipient
$mail_body = $message; //mail body
$subject = "Subject ".$Name.""; //subject
$header = "From: ". $Name . " <" . $email . ">\r\n"; //optional headerfields
$header .='Content-type: text/plain; charset=utf-8'."\r\n";
mail($recipient, $subject, $mail_body, $header); // mail command :)

?>
From stackoverflow
  • Make sure you're populating the From, Reply-To, Sender, Return-Path, and Errors-To headers with the sending e-mail address. There are so many reasons e-mails may be filtered as spam, though - your ISP may be blocked, the contents of the message may contain things that get it flagged, etc.

  • The problem is not necessarily in your code. One possibility is that your server's mail transfer agent is misconfigured - I've experienced this issue once. Worth checking.

  • The problem is not coming from your code. You may need to configure your service. In order to be accepted by most email service providers, you should set up a DomainKey or a Sender ID.

    You should also make sure that your ip is not blacklisted if you are running this code on a Dedicated Server.

Given the names of the types as strings, how would I construct a generic using reflection?

Presuming I have the strings "List" and "Socket," how would I go about creating a List<Socket>?

The answer I need will work just as well for Queue and XmlNodeList, not to mention MyCustomGeneric with MyCustomClass.

From stackoverflow
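  • A minimal sketch of the usual approach (hedged: it assumes the names can be resolved to Type objects, and note that CLR names for generic types carry their arity after a backtick, so "List" is really System.Collections.Generic.List`1):

    using System;

    // Resolve the open generic type; mscorlib types resolve without an
    // assembly qualifier.
    Type openGeneric = Type.GetType("System.Collections.Generic.List`1");

    // Resolve the type argument; Socket lives in System.dll, so it needs
    // the assembly name when resolved from a string.
    Type argument = Type.GetType("System.Net.Sockets.Socket, System");

    // Close the generic type over the argument and instantiate it.
    Type closed = openGeneric.MakeGenericType(argument);
    object instance = Activator.CreateInstance(closed);
    // instance is a List<Socket>; the same pattern works for Queue`1 or
    // a custom generic, given assembly-qualified names.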

Why does adding the @ symbol make this work?

I am working with asp.net mvc and creating a form. I want to add a class attribute to the form tag.

I found an example here of adding an enctype attribute, and tried to swap it out with class. I got a compile error when accessing the view.

I then found an example of someone adding a @ symbol to the beginning of the property name, and that worked. Great that it works, but I am one who needs to know why, and a quick Google search was not helpful. I understand that C# allows one to prepend @ to a string literal to create a verbatim string in which escape sequences are ignored. Why does it work in this case? What does the @ tell the compiler?

Code that produces a compile error:

 <% Html.BeginForm("Results", "Search", 
    FormMethod.Get, new{class="search_form"}); %>

Code that does work:

 <% Html.BeginForm("Results", "Search", 
    FormMethod.Get, new{@class="search_form"}); %>
From stackoverflow
  • In C#, 'class' is a reserved keyword - adding an '@' symbol to the front of a reserved keyword allows you to use the keyword as an identifier.

    Here's an example straight out of the C# spec:

    class @class {
        public static void @static(bool @bool) {
            if (@bool) System.Console.WriteLine("true");
            else System.Console.WriteLine("false");
        }
    }
    

    Note: this is of course an example, and not recommended practice.

    GrillerGeek : I guess that would make sense. I was looking way too deep to find the answer to this one.
    Erik Forbes : I know how you feel. =)
    Daniel Brückner : Nice! I was not aware of that ... +1
    Michael Meadows : The example code is way scary! class @class, bool @bool, switch (@switch). Fun to be had with class @struct, or bool @int... It makes this possible: public static @static @static;
    bendewey : +1 learning things on SO.
    Erik Forbes : Yeah - definitely a readability nightmare if taken to the extreme like the example.

Using Reflection and Collections to Iterate an Object-Relational Database

Hi all, I have a C# question.

I've been searching for a way to, let's say, work with dynamically created objects at runtime by iterating through them and their properties/methods. So far I've played with using reflection and foreach to iterate the class properties. This is to display all records in various tables on a DataGridView. However, one approach, I think, is to use object-relational mapping to connect, populate a DataSet, and then be able to display it on the DataGridView for editing, etc.

*Edit: Maybe iterating over the actual namespace of classes; all the classes inherit from a class called Table, so I guess there should be a way of invoking all classes at run-time to fetch the data from the database.*

I've done mostly Java programming up until now, so I have a good idea of OO programming; I'm just not so familiar with .NET, so I'm looking for the best approach to this.

Thanks in Advance

From stackoverflow
  • There are object-relational libraries available in .NET. If you're targeting SQL Server, use Linq to SQL, or if you need cross-database support, try the Entity Framework.

    Darragh : Have been trying this, but the classes that define each table don't implement IEnumerable. There are 1000 or so classes, and it wouldn't be practical to add IEnumerable to each class, so I need to find an alternative approach.
    Daniel Earwicker : If you use the pre-defined frameworks, you don't need those classes to define each table. The frameworks have the capability to automatically generate classes to provide you with a typesafe way to talk to the database.
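  • For the reflection half of the question, a hedged sketch of one way to get arbitrary objects onto a DataGridView: reflect over the public properties once, build a DataTable, and bind that. (ToDataTable is a hypothetical helper name; the sketch assumes each Table subclass exposes its columns as public properties.)

    using System;
    using System.Collections.Generic;
    using System.Data;
    using System.Reflection;

    static DataTable ToDataTable<T>(IEnumerable<T> rows)
    {
        PropertyInfo[] props = typeof(T).GetProperties();
        DataTable table = new DataTable(typeof(T).Name);

        // One column per public property; unwrap Nullable<T> so the
        // column carries the underlying value type.
        foreach (PropertyInfo p in props)
            table.Columns.Add(p.Name,
                Nullable.GetUnderlyingType(p.PropertyType) ?? p.PropertyType);

        // One row per object, with values read via reflection.
        foreach (T row in rows)
        {
            object[] values = new object[props.Length];
            for (int i = 0; i < props.Length; i++)
                values[i] = props[i].GetValue(row, null) ?? DBNull.Value;
            table.Rows.Add(values);
        }
        return table;
    }

    // Usage: dataGridView.DataSource = ToDataTable(allEmployees);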

What happens to a file when the connection through SFTP is interrupted?

I need to implement file transfer from a web server to an SFTP server. When the connection is interrupted during the transfer, what happens to the bytes already transferred?

From stackoverflow
  • Unless the SFTP server has built-in capabilities to resume an interrupted transfer (not sure if this exists; if it does, it is probably an add-on), the file will not be transferred at all.

  • Short answer: the part of the file already transferred is lost/garbage collected/marked for deletion.

  • The most likely scenario is that a partially transferred file will now exist on the server.

    Few servers would mark such a file for deletion - it would mean that you could never resume a broken transfer, but would have to start again from scratch.

Best Practice for when to implement Ajax?

In this Microsoft tutorial they implement Ajax in the final step: "Step 7. Implement Ajax"

I know it's only a tutorial, so it's kept simple, but I have seen this idea elsewhere too: build the app, then sprinkle Ajax where appropriate.

I implement Ajax as I go, but I am wondering what people feel is a best practice concerning this. Is it best to throw Ajax on at the end, or should you be implementing it as you go? Has anyone run into trouble one way or another?

From stackoverflow
  • What does 'as you go' mean? I start with a design, so it's very clear what the final product will be. So, yes, you need to know where the Ajax stuff happens right from the beginning.

  • The argument for doing AJAX last is that you are much more likely to develop a site that degrades gracefully if you get it working without AJAX first.

    This does not matter to some people--I've seen (internal) sites designed for a specific version of a specific browser with a definite set of features turned on. But if it matters to you, it is much easier to add AJAX to a site that works well without it than it is to remove AJAX from a site that depends on it from the start.

  • I do it as I go, where it makes sense. It's rare that we have a complete design spec from day 1, so you have to make a decision sometimes.

  • Ignoring the fact that you'd expect the requirement for Ajax to fall out of the user interface specification requirements (if you have such a thing), I'd think about implementing it (or implementing the hooks) sooner rather than later.

    1) retro-fitting it to something that's been architected to return complete pages will not necessarily be straightforward

    2) it'll potentially impact the deployment pattern of your solution, in that it'll be serving up pages plus page fragments/objects, and those fragments will result in many more hits to your server (imagine the extra hits that your server would get if you introduced something like Google Suggest, where potentially every keypress could result in a new server request).

    So you may not want to implement the Ajax-ness immediately. But I would urge you to consider it (and whether it's required) sooner rather than later.

  • If your core user interactions will rely on Ajax (e.g., Google Docs), then you should implement those bits early.

    Otherwise, if your core interactions rely instead on reliably storing and retrieving data, add Ajax last. This way, you force yourself and your team to test your app's behavior as if JavaScript were unavailable on your user's browser. In this instance, Ajax would be an extra layer of user interface goodness.

    Keith : As usual, never an absolute rule, sometimes I forget that. Thank you Ron.
  • Another argument for implementing Ajax early is that whenever you add any capability late in a project, you often have to tear down and rebuild some existing code or design to make it work. When you plan for the feature from the beginning, there's a lot less recoding required.

Facebook Toolkit 2.0 - Error when running it on a server with asp.net 2.0 installed

I'm getting the following error when trying to run the Facebook Connect toolkit on a server with ASP.NET 2.0 installed. I manually moved System.Core and System.Xml.Linq over to the server from the 3.5 DLLs.

Most other things seem to run, until I try to access the "users" object - more specifically, the getInfo() method.

This is the error - anyone have any ideas? Thanks!

Method not found: 'Void System.Xml.XmlReaderSettings.set_MaxCharactersFromEntities(Int64)'.

[MissingMethodException: Method not found: 'Void System.Xml.XmlReaderSettings.set_MaxCharactersFromEntities(Int64)'.] System.Xml.Linq.XNode.GetXmlReaderSettings(LoadOptions o) +0 System.Xml.Linq.XElement.Parse(String text, LoadOptions options) +60 System.Xml.Linq.XElement.Parse(String text) +7 Microsoft.Xml.Schema.Linq.XTypedServices.Parse(String xml) +23 facebook.users.getInfo(String uids) +201 facebook.users.getInfo(Int64 uid) +34 content_FBLoggedIn.Page_Load(Object sender, EventArgs e) +481 System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +15 System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +34 System.Web.UI.Control.OnLoad(EventArgs e) +99 System.Web.UI.Control.LoadRecursive() +47 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +1061

From stackoverflow
  • Googling for *set_MaxCharactersFromEntities* suggests this might be a problem of 64-bit DLLs on the server vs. 32-bit DLLs on the local development machine.

  • Just an update - I actually ended up installing the 3.5 DLLs on the server - I couldn't find any other solution that worked out.

    Once I did that, all problems went away.

    Thanks for the tip, Ax - the server is running in 32-bit mode, though.