Sunday, March 27, 2011

Inheritance: Can I override base class data members?

Let's say I have two classes like the following:

class A
{
public:
  // ...
private:
  int length;
};

class B : public A
{
public:
  // ...
private:
  float length;
};

What I would like to know is: 1) Is OVERRIDING OF BASE CLASS DATA MEMBERS allowed? 2) If yes, is it a good practice? 3) If no, what is the best way to extend the type of the data members of a class?

There is a class that satisfies my needs and I want to reuse it. However, for my program's needs, its data members should be of another type.

I have some books, but all of them refer only to overriding of base class member methods. Thanks everybody in advance.

From stackoverflow
  • You can use templatized members, i.e., generic members, instead of overriding the members.

    You can also declare a VARIANT(COM) like union.

       struct MyData
       {
            int vt;              // To store the type

            union
            {
                LONG      lVal;
                BYTE      bVal;
                SHORT     iVal;
                FLOAT     fltVal;
                // ...
            };
       };
    
    strager : And if you want more functionality for a certain class (similar to subclassing), can he use template specialization?
    Vinay : yes he can specialize the templates
    OJ : This technically isn't overriding ;)
    Vinay : No need to override the members if members are generic for his case. Moreover it is a bad practice to override the members.
    OJ : Totally agree (see my answer below). Changing meaning of data members is a fail. But creating template instances isn't overriding :)
  • private means that the data cannot be touched, even by derived classes. protected means that the derived classes can use the data -- so the length of class A won't affect the length of class B. Having said that, it's probably best to have a virtual function GetLength() that's overridden by the derived class.

    (as has been pointed out, I'm wrong-- you can't use GetLength on different types, so if the base class is int and the derived class is float, then that's not going to work).

    strager : You cannot change the signature to override a virtual member function. Doing so would create a different virtual function.
    mmr : Maybe I'm explaining it badly-- if you have a virtual member function, the derived class can override the virtual function. The derived class then uses the derived method.
    strager : I know that. However, try compiling "virtual int GetLength()" in class A and "virtual float GetLength()" in class B.
    mmr : ah, good point. My bad.
  • 1) No, you can't. You can create your own internal/hidden/private members, but you can't override them.

    2) If you could, no, it wouldn't be good practice. Think about it ;)

    3) You shouldn't, as you're changing the meaning of the member.

    strager : Think about virtual functions: aren't you changing the meaning of a call? However, I still agree with you; functions are called, but members are used.
    OJ : I agree with redefining behaviour based on type. I do not agree with changing the underlying meaning of a data item. :)
  • While declaring a data member of the same name in a derived class is legal C++, it will probably not do what you intend. In your example, none of the code in class A will be able to see the float length definition - it can only access the int length definition.

    Only methods can have virtual behaviour in C++, not data members. If you would like to reuse an existing class with another data type, you will have to either reimplement the class with your desired data type, or reimplement it as a template so you can supply the desired data type at instantiation.

  • 1) yes it is allowed, as in, you can do it

    2) No! Bad practice. If someone calls a method that uses 'length', the variable being returned will be undefined.

    3) Try a different design. Perhaps you want to have a similarly named function or use the baseclass as a "has-a" class instead of an "is-a" class.

Does outsourcing make you worry about the security of your job?

Everyone says that IT work (the technical stuff like programming, etc) is always outsourced to India because the cost of living there is less, and thus the price of work (especially by freelancers) is a lot less than someone doing the work in a developed country like the UK/USA. Being Indian myself (but living in the UK), I can vouch for the cost of living being less in India.

Do you therefore worry that the work we do and the number of jobs available is at risk?

My view is that there is more to choosing a supplier of solutions than just the cost. Companies in India may be cheaper, but my dad works for an airline that is set to make its first loss in several years. They get work done by a company quite local to me (5 miles away), where I went for an interview a few weeks ago. This company (a startup, too) probably charges more to ensure it makes a profit. So if a loss-making company chooses a supplier likely to set a higher price, then there is more at stake than getting work done by a company in a country with a low cost of living.

However, what is the freelance market like?

Also, I guess sometimes you need a coder who is physically onsite and at hand. It will be annoying and, in big companies, a major problem if you have some technical issue but have to wait 10 hours for the country on the other side of the world to wake up to fix the problem. Time is money. Am I right to think that server admin work is less outsourced? I say this because that line of work probably involves physical interaction with the server boxes, which obviously can't be done overseas.

Interestingly, there is now a lot of disruption about our jobs being given to workers from abroad who will settle for a lower wage, which is an important issue given unemployment rates, the recession, etc. It would be stupid to create more jobs only to have them taken by legal immigrants.

However, a search for programming jobs in London returns thousands of results, but only about 100 for accounting jobs (a profession which can be easily outsourced, because I don't see where a physical accountant is required).

What's your view? Is this a reason to worry about the safety of your job?

From stackoverflow
  • I'm in no danger of being outsourced, but whether my job will survive the economy is another matter.

  • Your job will be safe as long as what you do is relevant, simple as that.

  • It does not.

    One of the twelve principles of the Agile manifesto is:

    "The most efficient and effective method of conveying information to and within a development team is face-to-face conversation."

    Though some might see their work move offsite, those that follow certain Programming principles will always favor working in close quarters with colleagues, and this should give most of us some measure of hope.

  • It may depend on previous experience and whether the company has been "burned" before... In my company, we experimented with 3 different delocalized development teams a few years ago, and it went quite wrong (only one team - the one at headquarters - survived) - and these teams weren't even far apart (two different towns in Austria plus one in Germany), in the same time zone and speaking the same language. Thus I'm pretty sure that outsourcing development will not be an issue for me :-)

    I've also seen the problems that some developers had when talking to a support person for some software we use who worked in India - since both were non-native English speakers (and apparently both had quite an accent), the conversations were really difficult and several misunderstandings occurred. However, this may be less of an issue for native speakers.

  • Your job might change, but it will not disappear. More and more development work is required, it might be that in the future you manage a team of developers in India. Get ready for that, because that isn't a simple task.

    I notice that developers in India are very different from the local ones. They never say 'no', which leads to just hacking around crap to get stuff done within the deadline. Sometimes this is good; most of the time it means a lot of extra cost later on, to refactor all the crap. A lot of them are not as highly skilled as you might think. Still, there might be very good ones.

    But they are not sitting here next to the customer experiencing their needs. For some tasks this isn't required, for some it is. The number of tasks where it is required might decrease in the future. But there is a lot of development work required in the next few years. I think this will probably equal the current workforce that is not outsourced.

    dotnetdev : Funnily enough, my dad manages a team of devs in India. He's a project manager managing devs with Lotus Notes dev skills (the company I described, which works for the company my dad works for, does ASP.NET, C#, SQL Server, etc.).
  • Outsourcing is very different to working with a local team. To be able to work effectively with an outsourced team/company (and not end up regretting it) you need to have all the following in place before anything starts, at a bare minimum:

    • Specification (very, very detailed)
    • Acceptance criteria (use cases, automated tests)
    • Milestones
    • Coding standards (and ways of verifying, e.g. FxCop, StyleCop)

    This simply isn't possible on many projects because the spec can't be known in its entirety at the start. This is particularly true with things such as consumer websites where there's typically a fairly fast feedback cycle from user comments and usability studies, which can change the direction, priorities, and milestones of a project on short timescales.

    In addition, getting a very detailed spec along with all the acceptance criteria and tests can be very time consuming and expensive. And often you need to write the acceptance tests yourself and/or use another outsource company to do them so you know the results haven't been 'faked' (and yes, this does happen). And even then you still need a core team of product managers, dev leads, etc. to manage the offshore team while the project is under way. In my experience this can end up more expensive than doing it onshore in many cases, and many companies are starting to realise this.

    So no, I'm not worried about either losing my job, or having my job change to being the manager of an offshore team. Any fast moving projects that require interpretation of the spec and feedback loops with the product team and users will always need good onshore developers. And those are some of the most interesting projects to work on.

    eglasius : I disagree, especially as our teams work remotely, don't use that ton of overhead, and ship effectively, with high quality and good code to clients. Of course, India rates are a third of mine :P :(
  • Problem is, developers for the most part don't make the decisions about whether or not it is cost-effective to outsource...managers do. And most managers, at least the higher-up ones, are paid based on their short-term performance, not the long-term success of the company.

    90% of the people that aren't worried about their job being outsourced are kidding themselves (or have their heads stuck in the sand) and they SHOULD be worried about it...or maybe just that final 10% is here on Stack Overflow.

  • Having worked on four projects in the last year that each had elements of outsourcing, the answer is a definite no.

    The biggest problem was time zone issues.

    None of the companies will repeat the experience and in future will only work with local developers.

    However, the new trend around here seems to be getting outsourcing companies to send their staff here to work locally. They are still paid at outsourcing rates.

  • If you work for a company whose main product is software, it doesn't make sense for them to outsource that software. That is their bread and butter, you just don't do it. Joel Spolsky made a statement which hits the issue dead on:

    If it's a core business function -- do it yourself, no matter what.

    Pick your core business competencies and goals, and do those in house. If you're a software company, writing excellent code is how you're going to succeed. Go ahead and outsource the company cafeteria and the CD-ROM duplication. If you're a pharmaceutical company, write software for drug research, but don't write your own accounting package. If you're a web accounting service, write your own accounting package, but don't try to create your own magazine ads. If you have customers, never outsource customer service.

    In his essay "The Pitfalls of Outsourcing Programmers", Michael Bean says companies have confused the chocolates with the box they come in.

    Why Some Software Companies are Confusing the Box for the Chocolates

    Recently, I bought some chocolates as a gift for some friends from a specialty shop. These chocolates are remarkable. Owner Jean-Marc Gorce makes them by-hand and his small shop has been rated as one of the top ten in the United States. In addition to being a chef, Jean-Marc is also an entrepreneur and an innovator.

    Jean-Marc recently started selling his chocolates in gold and blue boxes. I told him I liked the new boxes. He explained that his wife designed the boxes and he found a company in the Philippines that could produce the boxes in the small volume they needed for a good price.

    Jean-Marc’s gold and blue boxes are an example of successful outsourcing. Jean-Marc sells chocolates, not boxes. The design and production of chocolates is his core competency. Jean-Marc can outsource box production to improve his operational efficiency without sacrificing his reputation as a maker of superlative chocolates.

    While outsourcing boxes improves chocolatier Jean-Marc’s operational effectiveness, he would never consider outsourcing chocolate production because he would lose his core differentiation advantage. Yet, in their enthusiasm for cost savings, several US technology companies have done precisely that-- outsourcing their core technology and key strategic differentiator.

    So yes, if you are writing software for a non-software company then you stand to lose your job to outsourcing. However, if you find employment at a company that writes software as their core business then you are safe. No smart business would outsource their main competitive advantage.

    Phil : Nice comment and hit the point! Here is a joke for you. Try telling this to IBM that went to the extent of requesting American workers to emigrate to India and accept the pay native Indian developers get. $$$ this quarter has more power than long-term business viability evidently to some managers.
  • Time and execution risk are critical considerations in offshore development -- not just money. Anyone who has an opportunity to do so, should reiterate this point with stakeholders.

    All points raised so far about team communication are spot on, too.

    Actually, I've taken on work that was badly botched through outsourcing, with clients that finally realized that it was -- at least in their case -- a false economy. In one case the project was plagued with communication lapses and language barriers, but more importantly, the project was not properly specified, and there were no standards for workflow or coding. No one offshore noticed or cared. They just cobbled up some code that sort of did what they understood it should do (without any consideration for error handling, maintainability, best practices, or documentation). They just chucked it back over the wall -- here ya' go! And the client got what they paid for.

    Now, this may not be typical, and it could happen with a local team - but it certainly would not have gone on as long as it did if it had been a local team.

    eglasius : @Clayton I agree with the overall info, but there are public examples of it going on long in big projects executed in the US. Bottom line, it can go wrong anywhere. +1 for not throwing money at cheap rates that aren't actually saving anything.
  • I wouldn't count on the rates in India continuing to be significantly lower than the US and Europe. A modest condo in Bangalore goes for $250k these days.

    Unless you can outsource and pay < 30% of your current development costs, you will lose money.

  • I worked for a company that made a big bet on outsourcing to India. Our company lost the bet and very nearly sank. Our company partnered with an Indian outsourcing firm and ramped up an office of more than a hundred developers in about two months, purportedly a CMM Level 5 company (ha ha). I hear that these outsourcing firms don't get the best developers. The productivity of the teams was miserable. We sent one project over there that had cost us half a developer onshore, and the offshore team took 4 people, plus about a half-person onshore. After the offshore offices were shut down, we ended up re-writing most of the work that was done offshore.

    I'm not sure our experience was typical. We made a lot of mistakes. Most of the domain knowledge and technology expertise was still onshore, and we shipped the 'coding' offshore - which does not work (duh).

    I expect to see more development going offshore, so I think there is reason to worry about keeping a particular job. I see the market for developers becoming increasingly global, and we're already seeing accounting, legal and other work moving offshore. However, there will always be a market for local developers, especially for small projects and customers who value close interaction. I believe there will always be work, but the nature of the development may change, and we have to keep learning and be prepared for the shifts when they come.

    eglasius : that's the thing, no. of developers != value produced, and rate/hour != savings/hour. It varies a whole lot, and that's something that is just hard to deal with for everyone, clients and providers. It really gets in the way, when your rate is 3 times theirs, but the client hasn't gone through that.
  • A company really has to know what it wants to have it built overseas. If the specs could be provided along with proper testing of the application, it could work. Most companies are not capable of this.

  • Worst case I just move to India.

    kenny : +1 lately my thought as well, or another low cost-of-living place.
  • Yes. I think we should worry about outsourcing. Most failures are due to a lack of understanding of how distributed development should work. There will be better communication, better specs, better tools, better team-building experience and gradual transfer of domain knowledge in the future. Capitalism will make outsourcing work eventually.

  • When I was working for an American web dev company the outsourcing failed. The communication barriers from language and geographic location were too much.

    So I don't think outsourcing of programming jobs from the US to other countries is a problem. What IS a problem is companies located in other countries making competing products (like Japan w/ the auto industry). This probably will be an issue because the programming culture is much stronger in other countries than in the US.

  • In a new market, wages creep up to match demand. Our company actively maintains groups in several regions, including India. Plans to save money by hiring in India were dashed when it was discovered that engineers would stay on long enough to receive domain training and build contacts, then would leave for a more lucrative position with a local competitor.

    So now, we have engineers in India, but only because they are good, not because they are cheap.

    The same story is playing out (for us) in Egypt right now.

  • I don't see outsourcing as a big concern about the safety of my job for a few reasons:

    1. My current job may be static, but I am not. The job I want will likely change over time, and I'll want to change what I do. For example, 10 years ago the web work I did was in Visual Studio 6.0 using C/C++; that has changed drastically between then and now. Do I expect just as big changes in the next 10 years? Why not? It makes sense that there should be these big shifts every so often, so why not try to roll with it rather than be the hamster on the wheel that goes nowhere fast. I know that my mother and father managed to have their work be repetitive over and over, but that wouldn't work so well for me. My father delivered dairy products while my mom was a nurse for a centre that houses some developmentally disabled individuals.

    2. There will be some work done onshore by a few different entities: Consulting firms will still be around but may have a combination of workers from various countries and then there are the big IT departments that companies have that I don't really see going away for those industries with a fair amount of regulation. For example, I don't think I could see any of Canada's big banks using another company for all IS functions as that would leave them way too open to potential lawsuits and other headaches that are avoided by having customized systems that are almost always in a state of upgrade as there are better ways to do things that people are finding.

    Another point is that while I would imagine most people could program if given the proper incentives (picture holding people's families hostage and threatening harm if the code contains bugs, for a rather extreme example), that isn't realistic in most cases, and so the tiny slice of the population that likes building systems or finding solutions to problems will be the ones having the jobs doing things that other people don't want to do.

Windows Live ID web authentication SDK giving Java app server error

I'm making a website in C# .NET and using the Windows Live ID Web Authentication SDK for my logins. I'm trying to deploy the site using Server 2003 and IIS 6.0, but I'm getting the following error when redirecting from the login page to webauth-handler:

"HTTP Status 405 - HTTP method POST is not supported by this URL

type: Status report
message: HTTP method POST is not supported by this URL
description: The specified HTTP method is not allowed for the requested resource (HTTP method POST is not supported by this URL).

Sun Java System Application Server 9.1"

This used to work fine when developing on localhost, but broke once I tried to deploy my website. When I deployed my site I set up a new project on lx.azure.com, and added the application id and secret key to my web.config.

I'm confused by the Java error, as I have no Java application server running, and don't use Java on the server at all!

If anyone knows why this may be happening, I would really appreciate your knowledge :)

From stackoverflow
  • The problem was with my URL forwarding - it was putting a frame into my HTML, so the error was most likely occurring on the domain name company's site, hence the Java error.

Can I use a RAM disk to speed up my IDE?

Duplicate:

RAMDrive for compiling - is there such a thing?

I have an idea how to speed up my IDE.

I want to create a RAM disk and move my solution onto this virtual disk.
I think that this can speed up the IDE because RAM is much faster than a HDD.

Has anyone done this before?

PS: I think, when I have some documents in my program (real world) which are used frequently (for example some document templates), it could be a good idea to move these documents onto a RAM disk as well to speed up I/O. Am I wrong?

If power is a problem, a UPS could solve it.

From stackoverflow
  • I have a 128GB Samsung flash based hard drive and it is FAST. My whole system, VM and IDE included, load in less than one minute.

    : Are you being sarcastic?
    Otávio Décio : Not really. It is a Dell Precision laptop with 8GB memory, XP64, 128GB flash drive, Extreme processor. Darn fast, honest.
  • I do remember reading about doing this with NetBeans a while ago. This article has quite a good guide on doing it in Linux.

    NetBeans on speed

    Currently I can't find an article on how to do it in Windows, however I know it's possible.

  • Personally I'd just buy an SSD; you could lose your whole solution at any time if your RAM loses power.

    Right now I have 4gb of ram and a 150gb 10k rpm velociraptor hard drive for my boot disk, running win xp pro 64bit and everything(VS 2008, sql management studio, and my testing VM's) is very fast.

  • Honestly, if you have Vista/Windows Server 2008 x64 and you jam your workstation with 4 to 8 GB of RAM, for most tasks, everything will be in cache or stored by SuperFetch, which will be a lot easier to manage and just about as high performance as a RAM disk. The RAM disk won't do a thing for you if you're starving your other system RAM to make it work.

    BTW, I tried your suggestion a couple of years ago. While it technically worked, copying the necessary data to the RAM disk on every boot took too long and was a pain.

  • Given sufficient RAM, this problem's been solved for a long time. If you have lots of RAM, stuff from the HD gets cached in it anyhow, and the HD is only the bottleneck the first time you're loading something at boot. As far as the initial boot time, I'd suggest using sleep/suspend mode and simply not rebooting your computer frequently.

  • I just read a post on this very subject: http://nesteruk.org/blog/post/Cheap-way-of-speeding-up-Visual-Studio-IO.aspx

    Dmitri Nesteruk : He-hey, that's my blog! Needless to say, I've been doing this for quite some time.
  • I have this one with 8GB attached, and it's so sweet ;) Dual SATA RAID0, it knocks the socks off SSDs.

  • This might not buy you much. If you use up memory with your RAM disk, you're just going to deprive the OS of RAM for caching and virtual memory, which is going to cause more frequent page faults and thus potentially more writing to the hard disk.

    You could lose your whole solution at any time if your RAM loses power.

    Element is absolutely right. Your machine could lock up at any time for whatever reason. If you do decide to use a RAM disk, at least have a batch file on your desktop that copies everything to disk, and run it frequently.

  • I posted an answer to a similar question previously. As a summary: yes it's possible, I used to do it with my browser (there is a link on how to get it running under linux).

    As other people have mentioned if you have a tonne of RAM that stuff will be cached for you anyway, but imo having a ram drive is a bit more explicit than just letting the OS try to deal with it.

    If you know that you want 200MB of code in memory all the time, then you know you can put in a RAM drive and achieve that.

    I do wonder however if the OS will cache it twice (once on the ram drive, once itself) if it doesn't know it's on a RAM drive...

  • Under Linux /dev/shm is a ramdisk so you might have some quick success using that as your compile destination (easier than rsyncing your source etc, also has the benefit, if you have one, of reducing any wear on an SSD in your machine).

How can I execute Python code without Komodo IDE?

I do that without the IDE:

$ ipython
$ edit file.py
$ :x (save and close)

It executes Python code, but not the file where I use Pygame. It gives:

WARNING: Failure executing file:

In the IDE, my code executes.

From stackoverflow
  • If something doesn't work in ipython, try the real Python interpreter (just python); ipython has known bugs, and not infrequently code known to work in the real interpreter fails there.

    On UNIXlike platforms, your script should start with a shebang -- that is, a line like the following:

    #!/usr/bin/env python
    

    should be the very first line (and should have a standard UNIX line ending). This tells the operating system to execute your code with the first python interpreter found in the PATH, presuming that your script has executable permission set and is invoked as a program.

    The other option is to start the program manually -- as per the following example:

    $ python yourprogram.py
    

    ...or, to use a specific version of the interpreter (if more than one is installed):

    $ python2.5 yourprogram.py
    

My site has to support multiple languages, how do I route in rails?

I have several resources and I want to somehow have a :lang parameter in the URL. So /en/posts/ would route to posts_controller/index with params[:lang] == 'en'.

How is this usually done? It looks ugly to have to add ?lang=en to every link, and it feels very un-railsy.

I think you know what I'm talking about even if I'm not very clear, what is the norm here?

From stackoverflow
  • One way of doing it would be:

    map.resources :yourresources, :path_prefix => '/:lang'
    

    You can also add language parameter filter:

    map.resources :yourresources, :path_prefix => '/:lang', :lang => /(en|de|jp)/
    
    ique : How would I then go about creating links with the resource? Would I have to do link_to myresource_path(:lang => "en"), or could I set some sort of default language to use when no argument is given?
    Milan Novota : Yes, you'd have to pass some language into the link helper. However, you can set a default language in the environment.rb (as a constant) and then create a method (say get_language) in Application controller that will return this default value if params[:lang] is not set.
  • Seems like the newest feature scheduled for Rails 2.3 is right up your alley: http://afreshcup.com/2009/02/01/rails-23-localized-views/

    Orion Edwards : While cool, this doesn't address routing...
  • Aren't you supposed to eschew routes entirely, and set language based on the browser's HTTP Accept-Language header? See Here.

    I'd imagine you should give your users an option to change their language, and then store that value in a cookie, which could then be pulled out in a before_filter.

    Why do this? Well, "HTTP best practice" states that each resource should have one URI. An article is still the same article, whether it's in spanish or english, so should have the same URI. site.com/en/article and site.com/es/article are clearly not the same URI, which violates this principle

    ique : But not being able to link to the correct language would cause people who visit a page to think they've come to the wrong page? If you speak English and get a link from an English friend and then come to a page that's filled with Swedish, wouldn't you be quite confused?
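
Milan Novota's default-language fallback could look roughly like this, stripped of Rails for illustration (a minimal sketch; `DEFAULT_LANG`, `SUPPORTED_LANGS`, and `get_language` are hypothetical names - in a real app the constant would live in environment.rb and the method in ApplicationController):

```ruby
# Default used when no :lang param is present or it isn't supported.
DEFAULT_LANG = 'en'
SUPPORTED_LANGS = %w[en de jp]

# Return the requested language if valid, otherwise the default.
def get_language(params)
  lang = params[:lang]
  SUPPORTED_LANGS.include?(lang) ? lang : DEFAULT_LANG
end
```

Link helpers could then call `get_language(params)` instead of requiring every caller to pass `:lang` explicitly.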

What is the best way to implement an AJAX main menu?

I'm building a site using ASP.NET MVC with lots of jQuery and AJAX stuff, and I'd like the main menu to work with AJAX as more or less the rest of the site does.

In an ideal scenario, I would like my main menu to 1) load the main content with AJAX if the user has activated javascript 2) change the url in the address bar (to enable link copy-pasting) 3) have my code in only one place, meaning that I don't want to have the same markup in an .aspx View and an .ascx PartialView.

Number 1) I have no problems with. However, I have no idea how I do to accomplish number 2) without a reload of the page. Any ideas?

I realize that the third issue could be solved by creating a View that renders each PartialView, but is there no nicer way around that? Some way to "wrap" the PartialView in the site Master or something at the Controller, before returning it to the client?

From stackoverflow
  • For #2: You can add a #hash to the end of the URL. Example, in your menu:

    <a href="#helppage" onclick="opento('helppage')">Help Page</a>
    

    And then in your body on load:

    if(document.location.hash) { opento(document.location.hash); }
    

    For #3 I don't know ASP. Sorry.

    Tomas Lycken : Well, as I'm using ASP.NET MVC, all my urls will look like example.com/Home/About or example.com/Products/List - in other words, they need to actually point to different locations. Do you know any way to change the address bar's actual location, without loading the new page?
    Isaac Waller : There are libraries that do this and create URLs like example.com/#/Home/About using the technique I described above. Maybe that is sufficient. If worst comes to worst, you could just create a giant iframe that takes up the whole page, but use my technique to add a hash to the main URL. Isaac
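    A minimal sketch of the hash-to-URL mapping such libraries perform. The helper names here are my own invention, not from any real library:

```javascript
// Sketch of the "#/Home/About" addressing scheme described above.
// hashToUrl and urlToHash are illustrative names, not library functions.

// Turn the current hash (e.g. "#/Home/About") into the server path to fetch.
function hashToUrl(hash) {
    var path = hash.replace(/^#/, ""); // strip the leading "#"
    return path || "/";                // fall back to the site root
}

// Turn a normal link target into the hash form shown in the address bar.
function urlToHash(url) {
    return "#" + url;
}
```

    On page load you would check document.location.hash and, if set, fetch hashToUrl(document.location.hash) via AJAX instead of following the normal route.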
  • I'm inclined to agree with Rob (though I won't vote you down ;)). JavaScript techniques like Ajax shouldn't be core to your site without good reason. I'd recommend searching Google for terms such as 'progressive enhancement' and 'unobtrusive javascript'.

    Build a nice RESTful site and then enhance with JavaScript and Ajax in places where it enhances the user experience to a significant degree.

  • Though I agree with some of the other comments about not loading content via AJAX for the sake of it, #2 in your list is a common problem that often crops up when dealing with AJAX and is worth answering.

    The only way to change the URL without the browser reloading is by adding a # fragment to the end. Example:

    http://www.yoursite.com/Content/About/#some-identifier
    

    There is no other way. You can look at sites like Facebook and Google Mail/Reader to see an example of this implemented.

    Hope it helps...

Coldfusion WSDL times out on first call

I have a web service set up on a small part of a website, and while the site overall gets a good amount of traffic, this particular service does not. Once a day, when I go to send a request through the web service, it will fail on the first attempt, but retrying the request a second time works just fine. It's as if it is no longer cached in memory and times out while starting up.

Is there a way to keep this service active, either on my end or on the web service provider's end, which is also a CF app (a separate division of our company)? It's a bit hard to troubleshoot because it only happens once after a long idle period, and I don't want to set up a separate process just to keep pinging this service.

From stackoverflow
  • Try increasing the requesttimeout and see if that helps.

  • If the server is being restarted regularly between calls to the template, ensure the "save class files" setting is enabled in the administrator (under caching) to prevent the template from being recompiled after each server reload.

  • You can try the following method on the web service client side. CF7+ has the built-in coldfusion.server.ServiceFactory Java service.

    The code can look like this:

    <cftry>
        <!--- here goes attempt to invoke service method, maybe dummy "ping" --->
    <cfcatch type="any">
        <!--- trying to refresh WSDL --->
        <cfset createObject("java","coldfusion.server.ServiceFactory").XmlRpcService.refreshWebService(ServiceURL) />
    </cfcatch>
    </cftry>
    <!--- usual code --->
    

    Hope this helps.

    Note: this factory contains a lot of useful methods, but there is almost zero documentation on the internet. A good idea would be to dump it and explore a bit.

Data URL / PNG from UIImage

Hello, I have an iPhone program that has a UIImage. This UIImage needs to be transferred to a JavaScript Image object in a UIWebView. I was thinking this could be done with a data URL that I send to the UIWebView like this:

[wview stringByEvaluatingJavaScriptFromString:[NSString stringWithFormat:@"loadimage('%@')", dataurlfromuiimage]];

So, I need to transfer my UIImage into a data: URL. I could do this myself if I could just get the PNG data, but I cannot find out how to do that either. If there is a better way to send this to the WebView, that would be good also.

From stackoverflow
  • Unfortunately, you'll need to convert your UIImage to a file representation of your image, not the decoded pixel information that is stored in the UIImage structure. That is, you'll need to somehow write it to a temporary file and get the raw NSData bytes for the file (probably JPEG or PNG). Then use a Base64 encoder. I don't think one is already provided by Apple, so you might want to look at this article: http://www.cocoadev.com/index.pl?BaseSixtyFour

  • To get an NSData representation of your image in the PNG format, use

    NSData *dataForPNGFile = UIImagePNGRepresentation(yourImage);
    

    Likewise, a JPEG representation can be obtained using

    NSData *dataForJPEGFile = UIImageJPEGRepresentation(yourImage, 0.9f);
    

    Once you have the NSData, you could write it to your Documents directory using writeToFile:atomically:, and then I believe you can pass it in as a local URL (although I've not tried this). An alternative is to use the Base64 NSData category that François P. references and somehow send it to JavaScript as Base64.

    rpetrich : You may not want to use the base64+javascript method for large images as it uses a lot of memory--Safari will keep both the base64 and decoded versions loaded. It should work well for small images or if you only have a few large ones.

When should I add a GUI?

I write many scripts at home and on the job. Most of the time the scripts get used only a few times to accomplish their chosen task and then are never used again. However, sometimes I write a script to do something more complicated, something that requires user input. It is at this point that I usually agonize over whether to implement a GUI or stick with a y/n, press 1-10, etc. command-line interface. This type of interface can become tedious to use and difficult to maintain.

I know some things lend themselves to a GUI more than others, such as selecting things in a giant list. However, the time it takes to switch a command-line application to use a GUI is prohibitive. For me, it takes a good amount of time to add a GUI with even the most simple framework I can find.

I am curious if any developers have a method of determining at what point their script has grown enough to need a GUI. Or am I going about this the wrong way, should I always be writing my scripts assuming I might later add a GUI?

From stackoverflow
  • As with many questions of this type, the answer is that it depends.

    If your program/script does just one single thing by receiving a number of inputs from the user, it is better to stick with the non-GUI mode.

    If the application is doing more than one thing and if you think that the user will use the application to do a lot of stuff, you may consider using a GUI.

    Are you planning to distribute this program to others? Then it is better to provide a GUI.

    If the users are non-technical, a GUI is a must!

    That's it.

    he_the_great : On your last point, it is not always the case. It depends on whether the user is a customer or a coworker. If the tool is simple enough, the command line is usable for a coworker.
  • When you want to hand your stuff over to someone else in a discoverable way. Command-line scripts are awesome because they are simple and elegant, but they are not very discoverable. That is, if you were to hand your scripts over to someone else with no documentation, would they be able to figure out what they are and how to use them? If your tasks are so simple that myscript /? will explain what you need to do fully, then you don't need a GUI.

    If, on the other hand, you are handing your scripts over to someone who isn't so technical, or who needs some more visual guidance about the task to be done, then by all means, a GUI is a good way to go. You might even want to keep your scripts as they are and just create a separate GUI that runs them, for maximum flexibility.

  • This doesn't answer your question but FWIW an intermediate step, between UI and command-line, is to have a configuration file instead of a UI:

    1. Edit the configuration file
    2. Run the program

    A configuration file format can, if necessary, be complicated and well-commented.

  • I think this decision also depends on the audience who will be using your script: if it is people who are comfortable working with the command line, then there is no pressing need to add a GUI, as long as your script has a good /help which explains all the parameters it accepts. But if you want the "average user" to be able to use your program, I'd rather add a GUI, because otherwise your program might not be intuitive enough for that user group.

  • If you only need some "Dialogs" to improve your scripts, you can use KDE Kdialog or Gnome Zenity.

  • I can't count the number of times I've written what I thought would be a 'one-off' that became more useful than I thought, and I ended up writing a GUI for it, or I've needed to come back and use a program months later. The advantage of the GUI is that it makes it easier to remember what would otherwise likely be command-line arguments. I.e., for flags and options you can simply use check boxes, combo boxes, radio buttons, and file selectors for filenames. I use Borland C++ RAD, so it is quite quick and easy to throw together a simple (or even not-so-simple) dialog box. I now often start with creating the GUI.

  • If you use Linux, try Zenity. It's an easy-to-use tool for making a GUI for command-line programs.

Extending borders

I have an image that will be centered (left and right) in the window; there is no left border, but there is a right border. I was wondering if it is possible for the top border to go from the very left of the page (past the image) and stop at the right border, and for the bottom border to start at the left end of the image and stretch all the way to the right of the window. The top and bottom borders are made of two different repeating backgrounds, and the left border can be too, if needed.

I've been thinking about this for a while but couldn't come up with any solutions...can someone help me?

From stackoverflow
  • You might want to clarify how flexible you're willing to be. You can approach this multiple ways. Do you want the top and bottom borders to extend to the edge of the viewport (thus needing them to be fluid-width)?

    You can handle this using background images with background-position and a sliding door technique, or you can use extraneous markup to create a three-column fluid width layout with your image in the center.

    It is up to you but with the three-column technique, you could insert your extra divs (or whatever you would like to use) via JavaScript so you wouldn't have empty containers in your source, and use border-top and border-bottom instead of background images (thus shedding some load-time off of the page).

    Edit: And to clarify, you want it to look something like this Ascii drawing:

    _______________
                   |img|_____________________
    

    Edit: For the fluid width layout, consult one of many numerous sources on CSS Layouts, here's a good rundown: http://www.smashingmagazine.com/2007/01/12/free-css-layouts-and-templates/

    Then on your left and right columns, you would do border-top and border-bottom respectively (or use background-images if you want fancier borders), give your image borders and have the height of your three containers set so the borders line up together. Hope that helps.

    eerabbit : Basically, yes. The borders would also be over and under the image itself. What is the three column technique?
    eerabbit : I think I get it; thanks a bunch!

What is best-practice when designing SOA WCF web-services?

Given an operation contract such as:

[OperationContract]
void Operation(string param1, string param2, int param3);

This could be redesigned to:

[MessageContract]
public class OperationRequest
{
    [MessageBodyMember]
    public string Param1 { get; set; }

    [MessageBodyMember]
    public string Param2 { get; set; }

    [MessageBodyMember]
    public int Param3 { get; set; }
}

[MessageContract]
public class OperationResponse
{
}

[OperationContract]
OperationResponse Operation(OperationRequest request);

One thing I like about the MessageContract is that I get a little more explicit control over the format of the SOAP message.

Similarly, I could write nearly the same code, but use a DataContract:

[DataContract]
public class OperationRequest
{
    [DataMember]
    public string Param1 { get; set; }

    [DataMember]
    public string Param2 { get; set; }

    [DataMember]
    public int Param3 { get; set; }
}

[DataContract]
public class OperationResponse
{
}

[OperationContract]
OperationResponse Operation(OperationRequest request);

One thing I like about the DataContract is that I can define IsRequired, Order, and Name.

Today I expect the only consumer will be a WCF client. However, I want to design contract first and adhere to SOA practices as much as possible. Rather than have WCF dictate my SOAP, WSDL, and XSD, I want the XML to define the WCF layer, but use WCF to generate this so as not to add any custom message processing to WCF. I want to follow the most common SOA XML conventions which I believe is probably all tags beginning in lowercase - am I right? And I want to be as version tolerant as possible.

Is it wise to always create Request and Response messages like this? Which of the three formats promotes the best SOA practices? Should I go one step further and define both a DataContract and a MessageContract whereby the MessageContract only contains the DataContract? Or should I only ever use DataContracts if I am truly exposing a new type (i.e. do not create message types as containers)?

A loaded set of questions I know, but I am trying to get to the heart of it, and I am not sure separating the questions provides sufficient context to get the answer I am looking for.

From stackoverflow
  • I think it honestly depends on the scenario. If you're just exchanging simple types [i.e. string], OperationContract is certainly acceptable.

    You should use MessageContract when you want to modify the message format and DataContract should be leveraged when you want to express a Complex Type, such as an address.

  • YAGNI comes to mind here.

    I say don't overdo it on getting too fancy looking to the future. WCF is already nicely SOA-friendly. I say, in general, stick to the defaults and be careful about coupling until you have a specific need to do something elaborate.

  • XML usually tends to be camelCased. WSDL and XML Schema use camelCasing for both elements and attributes, for example:

    http://www.w3schools.com/wsdl/wsdl_syntax.asp
    http://www.w3schools.com/Schema/schema_howto.asp
    

    Why SOAP was defined differently, I do not know, but it uses PascalCasing for elements and camelCasing for attributes, see:

    http://www.w3schools.com/soap/soap_header.asp
    

    Similarly, most (maybe all) of the WS* specs use PascalCasing for elements and attributes; see: http://ws-standards.com/. XML Schema is agnostic about the conventions of the types it defines for XML.

    Thomas Erl (http://www.thomaserl.com/) has written many important books on SOA (http://www.soabooks.com/), including "Service-Oriented Architecture". In Chapters 13 and 15 he provides a number of examples of the XML of the various parts of typical transactions. He defines types and object members in XML Schema using PascalCasing, which nicely matches the normal patterns of C# class and property naming. Thus the WCF defaults already closely match the standards.

    Regarding actual message naming, some conventions use camelCasing and others use PascalCasing, so my preference is to match the primary language needs, which in the WCF case is PascalCasing. And WCF defaults match some examples of how the request and response message should be written: http://www.w3schools.com/soap/soap_example.asp.

    So the only outstanding question is now the basic question of how much to standardize around the use of OperationContract, DataContract, and/or MessageContract.

    Defining a DataContract only when you have a complex type (in XSD parlance) makes sense, and I tend to think YAGNI, as pointed out by Terry, is the correct choice, but Erl seems to suggest a much more process-intensive approach, so I am still not sure of the best default choice (the main part of the question).

    Damian Powell : Why is this answer community wiki? This is a great answer and vjrobinson should get the reputation he/she deserves for it.
  • It's always a best practice not to have multiple parameters in an operation contract; always have a type that wraps all the required parameters. This will help out in the long run: your existing clients won't break when you add a new optional parameter.

    I work in a business integration team where we integrate with other companies fairly regularly (AT&T, Cox, Exxon, ...) and have never seen a web service call that took more than a single parameter.

multi form page issue in asp.net mvc

Inside a form for adding a category I am rendering a control that contains a second form for adding a subcategory. The category form's action is category/add while the inner form's action is ../addSubcategory. My problem is that when submitting the inner form, my Add action is called.

So, my question is this: how can I make my inner form fire the addSubcategory action?

Any help would really be appreciated!

From stackoverflow
  • In case you're using AJAX to post, you could write your own JS methods that fire on the click of the submit button, calling the correct action methods.

    In case you're not using AJAX, I would recommend separating the two forms in your markup, and then aligning the subcategory form inside the category form with CSS. Use an empty div as a "spacer" in the category form, to make room for the subcategory form.

    A semi-pseudo-example, to (hopefully) make it clearer what I mean:

    <form name="categoryForm" id="catForm" action="/Category/Add">
    <!-- some form elements here... -->
    <div id="subCategorySpacer">&nbsp;</div>
    <!-- maybe some more form elements... -->
    <input type="submit">
    </form>
    
    <form name="subCategoryForm" id="subCatForm" action="/Category/addSubcategory">
    <!-- form elements here too. and a submit button -->
    </form>
    

    And then you move the subcategory form into the place where you have your spacer div, using CSS. Naturally, this requires you to know the exact sizes of the forms, or to use JavaScript to resize them on the client side.

  • I am concerned that you say "inner form." You can have two (or more) forms on a page, but you cannot nest them. This is probably what is causing your bug.

    With or without AJAX, you need to have two, separate forms on the page. I strongly recommend using an actual form, even with AJAX submission, because if you do not do this, you will not be able to gracefully fall back to regular form submission for users who have disabled JavaScript. The ASP.NET MVC Ajax.Form helper will take care of this for you. Make sure to test Request.IsAjaxRequest inside your controller action to determine whether to return JSON or a new View.

Why does cout print char arrays differently from other arrays?

I'm using C++ to understand how exactly pointers work. I have this piece of code using arrays, which I'm using just to understand how the equivalent works with pointers.

#include <iostream>
using namespace std;

int main() {    
    int arr[10] = {1,2,3};    
    char arr2[10] = {'c','i','a','o','\0'};
    cout << arr << endl;
    cout << arr2 << endl;
}

However when I run this, arr outputs the address of the first element of the array of ints (as expected) but arr2 doesn't output the address of the first element of the array of chars; it actually prints "ciao".

What is it that I'm missing or that I haven't learned yet about this?

From stackoverflow
  • There is a standard overload for char* that outputs a NUL terminated string.

  • Because cout's operator << is overloaded for char* to output strings, and arr2 matches that.

    If you want the address, try casting the character array as a void pointer.

  • It's the operator<< that is overloaded for const void* and for const char*. Your char array is converted to const char* and passed to that overload, because it fits better than to const void*. The int array, however, is converted to const void* and passed to that version. The version of operator<< taking const void* just outputs the address. The version taking the const char* actually treats it like a C-string and outputs every character until the terminating null character. If you don't want that, convert your char array to const void* explicitly when passing it to operator<<:

    cout << static_cast<const void*>(arr2) << endl;
    
  • While casting is probably a more meaningful approach, you could also use the addressof operator:

    cout << &arr2 << endl;
    
    D.Shawley : Probably want `&arr2[0]` instead of `&arr2` here
    Rob Kennedy : Those are the same addresses, Shawley.
    Mystic : An array in C/C++ is just a series of memory locations. So the address of array is the address of the first element. Wonder why this was voted down
    MSalters : Mostly because it doesn't explain the idea - a char[10]* does not convert to char*.

What is the best way to refresh a rollup table under load?

I created a table in my SQL Server 2005 database and populated it with summary and calculated values. The purpose is to avoid extensive joins and groupings on every call to the database. I would like this table to refresh every hour, but I am not sure the best way to do this while the website is under load. If I delete every record and repopulate the table in one transaction will that do the trick or will there be deadlocks and other trouble lurking?

From stackoverflow
  • It depends on the relationships in your database and the queries you run against it.

    If it's a summary table that can tolerate stale data, you could populate it using queries that perform their SELECTs without locks using the NOLOCK join hint. NOTE: use of the NOLOCK hint should only be done when you are sure of the consequences.

    There is often scope for re-tuning indexes, to reduce loading.

    jedatu : Thanks. However, I am worried about the rollup table not the normalized tables. The rollup table is fielding web requests while I want to delete all its rows and reload it.
    Mitch Wheat : @jedatu: can you update the summary values instead of deleting them?
    jedatu : @Mitch-Wheat: for part I could, but I also may need to remove records and that would require one or more large "NOT IN" queries
  • I have decided to build up the data in a @temp table variable. Then I will copy rollup ids into the temporary table where they match. Finally, I will add, update and remove rows in the rollup table based on the @temp table.

  • You could also create an indexed view; depending on how heavy your load is, this might be a good choice.

  • The way I have done this in a few projects is to use two copies of the table in different schemas. So something like:

    CREATE SCHEMA fake WITH AUTHORIZATION dbo;
    CREATE SCHEMA standby WITH AUTHORIZATION dbo;
    GO
    
    CREATE TABLE dbo.mySummary(<...columns...>);
    
    CREATE TABLE fake.mySummary(<...columns...>);
    GO
    

    Now create a stored procedure that truncates and re-populates the fake table, then in a transaction move the objects between schemas.

    CREATE PROCEDURE dbo.SwapInSummary
    AS
    BEGIN
        SET NOCOUNT ON;
    
        TRUNCATE TABLE fake.mySummary;
    
        INSERT fake.mySummary(<...columns...>)
            SELECT <expensive query>;
    
        BEGIN TRANSACTION;
            ALTER SCHEMA standby TRANSFER dbo.mySummary;
            ALTER SCHEMA dbo TRANSFER fake.mySummary;
            ALTER SCHEMA fake TRANSFER standby.mySummary;
        COMMIT TRANSACTION;
    END
    GO
    

    This is probably about the shortest amount of time you can make users wait for the new data to be refreshed and without disrupting them in the middle of a read. (There are many issues associated with NOLOCK that make it a less desirable alternative, though admittedly, it is easy to code.) For brevity/clarity I've left out error handling etc., and I should also point out that if you use scripts to synchronize your databases, make sure you name constraints, indexes etc. the same on both tables, otherwise you will be out of sync half of the time. At the end of the procedure you can TRUNCATE the new fake.MySummary table, but if you have the space, I like to leave the data there so I can always compare to the previous version.

    Before SQL Server 2005 I used sp_rename inside the transaction to accomplish exactly the same thing, however since I do this in a job, I was glad about switching to schemas, because when I did, the non-suppress-able warning from sp_rename stopped filling up my SQL Server Agent history logs.

JQuery Passing Variable From href to load()

I am trying to pass an href id to load() in jQuery. I can see from the alert the returned id # 960, so I know the id value is getting passed; I just don't know how to append the load URL. The $("#refreshme_"+add_id) is the important part: I use that to refresh the unique div id in the page assigned by the database pull, so it would be looking for id="refreshme_960". My simple example below is obviously not working. I want to refresh the db select part of the page showing the new list of cars. Any ideas?

$(document).ready(function() 
{
    $(".add_to_favorites").livequery("click", function(event) 
    {
        var add_id = this.id.replace("add_", ""); //taken from a.href tag
        alert (id);
        $.get(this.href, function(data) 
        {
            $("#refreshme_"+add_id).load("http://www.example.com/car_list/"+add_id);
        });
        return false;
    });
});


<a class="add_to_cars" href="/car-add/960/add" id="add_960">Add Car</a>
From stackoverflow
  • The 'var add_id' line looks a bit odd to me.

    What happens when you alert(add_id)? And why not just use id? What's with this add_id?

  • Sorry about that, it's actually alert(add_id). I was going to use id but I have a delete function too and wondered if that would get crisscrossed. Can you tell I am a n00b trying to make the pieces work?

  • This is a bit confusing: you have 'add_to_favorites' as your trigger, and then add_to_cars in the HTML element.

    To keep your id replacement all jQuery-ish, I think you would use

    $(this).attr('id').replace('add_', '');
    

    I think what you are trying to do is that when a user clicks the 'add_to_cars' link, it adds that from an AJAX response into the favorites? Though I could be wrong.

    I would question why you use add_960 as your id instead of just 960 (I know you aren't supposed to use numbers as ids, but I do it all the time and haven't had an issue yet).

    So you could do

    $('.add_to_cars').livequery('click', function(event){
        var request = $(this).attr('id');
        $.ajax({
            type: "GET",
            url: "http://www.example.com/car_list/"+request,
            success: function(response){
                $('.add_to_favorites').append(response);
            }
        });
    });
    

    or something like that. again, not entirely sure what you are trying to do.
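    Regarding the worry above about add and delete ids getting crisscrossed: one illustrative option is to keep the prefix and split it off when needed. This helper name is made up for the sketch, not from jQuery or livequery:

```javascript
// Split an anchor id like "add_960" or "delete_960" into its action and record id.
function parseAnchorId(anchorId) {
    var pos = anchorId.indexOf("_");
    return {
        action: anchorId.substring(0, pos), // e.g. "add" or "delete"
        id: anchorId.substring(pos + 1)     // e.g. "960"
    };
}
```

    A click handler could then branch on parseAnchorId(this.id).action and build the request URL from the .id part.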

  • That is correct, I'm trying to add the cars to a favorites list. How it's supposed to work: the visitor adds a favorite from the list of viewed cars, and it gets added to the list in the side panel of the site. Then the add-to-favorites button goes away and is replaced by delete-favorites.

    I guess I could use numbers for the id and that would make life more straight forward.

    In your example I plugged that in, and when I clicked the add-to-favorites button the jQuery did not do its magic and I spilled out onto the /car-add/ function page. Here is what I have so far, including your example. Maybe you can see where I am going wrong.

    $(document).ready(function()
     {
      $('.add_to_cars')livequery('click', function(event)
         {
         var request=$(this).attr('id');
      $.ajax({
               type: GET,
               url: "http://www.example.com/features/car_list/"+request,
               success: function(response){
                    $('.add_to_favorites').append(response)
          }
      });
       });
        });
    
    jacobangel : your code here is not correct due to syntactical error. Is this just an error copying it over, or is the code not working because of this.
    dvancouver : I copied it over this way, I am not sure what is wrong with it, I am not seeing it:(
  • Alright, I corrected the above code and things are at least processing, but now the list is appending strangely. I'll explain, but first my working code thus far; I changed the names of the DOM elements to make more sense, but other than that everything is the same.

    $(document).ready(function() 
    {
        $('.add_to_favorites').livequery('click', function(event)
        {
            var request = $(this).attr('id');
            $.ajax({
                type: "GET",
                url: "http://www.example.com/features/favorite-add/"+request+"/add",
                success: function(response){
                    $('#favorite-listings').append(response);
                }
            });
            return false;
        })
    });
    

    OK, like I said, things are working out, but the list appends oddly. For example:

    first saved shows their saved car.

    Your saved cars: -Pontiac 2005 Sunburst

    second favorite saved this happens and so on.

    Your saved cars: -Pontiac 2005 Sunburst

    Your saved cars: -Pontiac 2005 Sunburst -Ford 2006 Thunderbird

    It's duplicating instead of just adding. Not sure what I am doing wrong?

  • What is happening is that the database call is pulling in the new results with the old results above them: another favorite, and then a new set of db results below that. When I do a hard refresh, everything looks fine and the list is correct. Now my question is, how do I stop that from happening? I want the refreshed list to replace the old one. I could separate out the database insert and the results: hit /add-favorite.php/ for the append (db insert), then send a load() call to the actual queried list at /list-favorites.php/. Is this the best way to work this?

  • You are using append() in this line

    $('#favorite-listings').append(response);
    

    Which is why you get new results added after the old ones. Change that line to

    $('#favorite-listings').html(response);
    
    dvancouver : That worked very nicely, thank you!
  • That worked just perfectly, thanks, and thanks to everyone who has helped out thus far! What I am trying to do next is update the button section; there is an "if favorite saved then show remove button" and the opposite for remove, plus a notice to say things are "Updating...". I have tried a few things and they work, but are they efficient? That's the thing. My jQuery code to remove and add favorites is below. If you have any tips on how to incorporate or streamline it, please let me know.

    //Remove Favorite    
    $(document).ready(function() 
    {
        $('.remove_from_favorites').livequery('click', function(event)
        {
            var request = $(this).attr('id');
            var url = $(this).attr('href');
            $('li:has(a[href="'+url+'"])').remove();
            $('a[href="#fav-places"] > span').text('('+$('ul#favorite-listings > li').length+')');

            //updating message                           
            $('#loading_'+request).show(100);

            //refresh DOM that holds if then statement for add/remove buttons   
            $('#refreshme_'+request).load("http://www.example.com/global/favorites_add_remove_buttons/"+request);
            return false;
        })
    });

    //Add Favorite        
    $(document).ready(function()
    {
        $('.add_to_favorites').livequery('click', function(event)
        {
            var request = $(this).attr('id');
            $.ajax({
                type: "GET",
                url: "http://www.example.com/features/favorite-add/"+request+"/add",
                success: function(response){
                    $('#favorite-listings').html(response);
                }
            });

            //updating message
            $('#loading_'+request).show(100); 

            //refresh DOM that holds if then statement for add/remove buttons
            $('#refreshme_'+request).load("http://www.example.com/global/favorites_add_remove_buttons/"+request);
            return false;
        })
    });