Archive for the ‘ASP.NET Code’ Category

Support Architecture for your Web Application

August 16, 2012

Whenever I start a new project, in particular a web project of some kind, there are several steps I take to prepare for scaling the project or supporting different business functions. Let’s face it, not everything the website is expected to do should actually be done by the website; a classic example is notifying users that a subscription is about to expire. That sort of occasional, scheduled business process is certainly doable in the context of web development, but in general it’s not recommended.

A less apparent example is general email delivery, say a forgot-password email, where under high traffic you might want to off-load the delivery to a secondary process rather than hold up rendering of the page in the browser while the email is built, formatted, and ultimately delivered.

To support a wide variety of potential offline processes, I generally set up what I refer to as “harnesses” for three different types of processing. These harnesses are fairly generic: each interacts with an interface implementation and typically relies on the standard .NET configuration model to instantiate the classes implementing that interface. In many cases, the interface implementation is common to all three harnesses, so the processes are interchangeable.
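To make that concrete, here is a minimal sketch of what the shared interface and configuration-driven loading might look like. The names here (IOfflineTask, TaskLoader, the OfflineTasks appSettings key) are my own illustrative assumptions, not anything prescribed; a real implementation might just as easily use a custom configuration section instead of appSettings.

```csharp
// Illustrative sketch only: a shared task interface plus config-driven instantiation.
using System;
using System.Collections.Generic;
using System.Configuration;

public interface IOfflineTask
{
    // Each harness simply calls Execute(); the task decides what one "pass" means.
    void Execute();
}

public static class TaskLoader
{
    // Reads a semicolon-delimited list of assembly-qualified type names from config, e.g.
    //   <add key="OfflineTasks" value="MyApp.Tasks.InboxMonitor, MyApp; MyApp.Tasks.ExpiryNotifier, MyApp" />
    public static IEnumerable<IOfflineTask> LoadConfiguredTasks(string settingKey = "OfflineTasks")
    {
        var setting = ConfigurationManager.AppSettings[settingKey] ?? string.Empty;
        foreach (var typeName in setting.Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries))
        {
            var type = Type.GetType(typeName.Trim(), throwOnError: true);
            yield return (IOfflineTask)Activator.CreateInstance(type);
        }
    }
}
```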

There are three basic timing elements for things you might want to accomplish offline from your website. The first timing element I would refer to as “do something repeatedly every so often”. The second timing element I would refer to as “do something on a scheduled basis”, whether that schedule is every few hours or once a day or even once a month. The last timing element I generally prepare for is a “one off” or “one time” execution.

It should be noted that this sort of architecture presupposes that you have full control of your computing environment either through ownership of the servers or access via some sort of cloud computing or virtual hosting service, such that you can install and run items from the console or command line. Obviously this would not be possible if all you had was a web host for your website.

Under those assumptions, I will create three things. First, I will create a Windows Service. This service’s sole purpose would be to take a configured set of objects that implement my interface and run them repeatedly at a specified interval, say once every three minutes. A good example of this might be a process that monitors an email inbox for new messages and processes them in some fashion. Because this is a Windows Service, it might be wise to give each object its own thread, or if you are on the latest .NET platform, its own Task.
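Here is a rough sketch of what that service harness might look like, building on the hypothetical IOfflineTask/TaskLoader from the snippet above. The class name, the three-minute interval, and the bare-bones error handling are all assumptions for illustration; the Main method and service installer plumbing are omitted.

```csharp
// Sketch of the recurring harness: each configured task gets its own long-running
// Task that executes on a fixed interval until the service is stopped.
using System;
using System.Collections.Generic;
using System.ServiceProcess;
using System.Threading;
using System.Threading.Tasks;

public class RecurringTaskService : ServiceBase
{
    private readonly CancellationTokenSource _cts = new CancellationTokenSource();
    private readonly List<Task> _workers = new List<Task>();
    private static readonly TimeSpan Interval = TimeSpan.FromMinutes(3);

    protected override void OnStart(string[] args)
    {
        foreach (var offlineTask in TaskLoader.LoadConfiguredTasks())
        {
            var task = offlineTask; // avoid closing over the loop variable on older compilers
            _workers.Add(Task.Factory.StartNew(() =>
            {
                while (!_cts.IsCancellationRequested)
                {
                    try { task.Execute(); }
                    catch (Exception) { /* log and keep the loop alive */ }
                    _cts.Token.WaitHandle.WaitOne(Interval); // wakes early when the service stops
                }
            }, TaskCreationOptions.LongRunning));
        }
    }

    protected override void OnStop()
    {
        _cts.Cancel();
        Task.WaitAll(_workers.ToArray(), TimeSpan.FromSeconds(30));
    }
}
```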

Second, I will create a standalone console application that I intend to schedule to run regularly. This console application will also load up a configured set of interfaced objects and run them a single time when the scheduled task executes. A good example of this might be some sort of nightly statistical analysis that needs to be done for reporting. As with the Windows Service, if you have a lot of objects it might be wise to sequence the ones that depend on each other, note which ones are truly independent, and then multi-thread them or assign them Tasks in the proper order.
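A sketch of that scheduled harness might look like the following, again using the hypothetical TaskLoader from above. I’ve shown only the simplest case, fanning out the independent tasks with Parallel.ForEach; any sequencing of dependent tasks would happen around that call and is omitted here.

```csharp
// Sketch of the scheduled console harness: load the configured tasks and run each one once.
using System;
using System.Linq;
using System.Threading.Tasks;

public static class ScheduledRunner
{
    public static int Main(string[] args)
    {
        var tasks = TaskLoader.LoadConfiguredTasks().ToList();

        // Run everything a single time; genuinely independent tasks can run in parallel.
        Parallel.ForEach(tasks, t =>
        {
            try { t.Execute(); }
            catch (Exception ex) { Console.Error.WriteLine(ex); }
        });

        return 0; // a non-zero exit code could be used to signal failure to the scheduler
    }
}
```

The executable itself can then be wired up with the Windows Task Scheduler (for example, something along the lines of schtasks /Create /SC DAILY /ST 02:00 /TN "NightlyTasks" /TR "C:\jobs\ScheduledRunner.exe"), so the harness stays ignorant of when and how often it runs.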

Last, I will create a near-replica of the standalone console application above, but likely without the multi-threading in place, since this is intended for a one-time execution. The two applications might even be exact copies deployed separately, one scheduled and one not, if their requirements don’t diverge. A common use for this is one-time data conversions; say, for example, you inherited a poor database structure that stored a person’s name all in one field and you wanted to split it into first and last name.
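To show how such a one-off might plug into the same harness, here is a hypothetical task for that name-splitting conversion, implementing the IOfflineTask interface sketched earlier. The table, column, and connection-string names are made up, and the split itself is deliberately naive.

```csharp
// Hypothetical one-off conversion task; table, column, and connection names are illustrative.
using System.Configuration;
using System.Data.SqlClient;

public class SplitPersonNameTask : IOfflineTask
{
    public void Execute()
    {
        var connectionString = ConfigurationManager.ConnectionStrings["Main"].ConnectionString;
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            // Naive split on the first space; real data needs more care
            // (middle names, suffixes, single-word names, and so on).
            var update = new SqlCommand(
                @"UPDATE Person
                  SET FirstName = LEFT(FullName, CHARINDEX(' ', FullName + ' ') - 1),
                      LastName  = LTRIM(SUBSTRING(FullName, CHARINDEX(' ', FullName + ' '), LEN(FullName)))
                  WHERE FullName IS NOT NULL", conn);
            update.ExecuteNonQuery();
        }
    }
}
```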

Once I have these three harnesses built, it becomes very easy to take a plug-and-play approach to any task that needs to be scheduled or executed in a predictable fashion, without constantly creating new services or new executables to handle the work. That lets me easily move tasks that my web project might eventually find overwhelming or detrimental to performance off into background processing without reinventing the wheel each time. It also allows me, if I so choose, to install or write some code to monitor the execution of these harnesses without having to rewrite the monitoring code every time as well.

This approach has been helpful on several projects and saved me a lot of work down the line.

Beware how much you “help” people in the name of the next great usability idea.

July 26, 2012

I have been involved in many debates about usability over the course of my career. Indeed, each time a new design paradigm or some sort of design meme is introduced, there is always the discussion about how much it helps and whether it is worth the effort to implement it. In particular this becomes a problem on mature user interfaces with larger customer bases that are already familiar with how to get their tasks done via the current UI.

Recently I encountered the same sort of potential risk when driving a rental car. I’m driving a brand spankin’ new, fully loaded Ford Explorer right now while on vacation. It’s a great vehicle really, with an enhanced digital dashboard, on-board cameras for backing up, the whole shebang. It handles well and is quiet. I’d consider buying one myself.

Except someone at Ford decided to monkey with the directional signal.

I trust you are aware of the directional signal. It’s the little lever on the left of the steering wheel that you click up into place to indicate a right turn, or click down into place for a left turn. Once you turn the steering wheel back straight, the lever clicks back into its original position automatically.

Not so on the new Ford Explorer. On the Ford Explorer, there’s no click into place; when you push it up, it doesn’t stay up. So naturally you immediately think it’s broken. And then once you realize it’s not broken, you discover that the length of time the turn signal stays on is driven by how hard and how long you hold the signal lever in place. If you don’t hold it long enough, it blinks three times and then turns off.

Not only is it extremely confusing, but it goes against every other directional signal design on the road, and in some cases it’s dangerous. What it doesn’t do is make my signal turning easier, even though I can only assume that was the intention of whoever came up with this.

This is a cautionary tale for anyone looking to incorporate the next great design idea, or to help their customers do things more easily than they’ve done them before. It’s a good idea to understand how satisfied your customers are with your current UI, as well as how they are going to react to the changes you implement, especially if those changes fly in the face of the norms that permeate the web today. You run the risk of alienating the people you are trying to help if your designs are not truly intuitive to the people using them, or if you sacrifice familiarity for the next great design concept.

The last thing you want is someone on your website wondering why the turn signal is broken.

Why can’t web crawlers use some ethics…and some intelligence?

July 17, 2012

So I’ve been up since 4am this morning battling what is essentially a Distributed Denial of Service attack…basically a bunch of computers sending requests to our web servers over and over and over again. After two hours of battling, the culprit was found and disabled.

The culprit turned out to be 80legs, a company that offers to crawl data on websites via customizable code. However, their business practices are definitely questionable; a Google search is most enlightening. Their web crawler hit our site over 7,000 times in a 10-minute span, and based on that Google search, we are not the only ones.

Now there are a couple of things I simply don’t understand. First of all, who’s the genius there who thinks hitting any site on the web at this volume is a good idea? I understand that they have a business, that they are selling crawling technology, but how much do they expect to sell if the end result of implementing their crawler is the unwitting victim immediately blocking it? Certainly whoever is paying them to crawl our site is now going to be disappointed.

Second, why would anyone think that this sort of crawling is ethical in this day and age of botnets and hackers? If I were building a business on this technology, I would at a minimum make sure targets could remove themselves from the line of fire (80legs claims it does so, but it doesn’t work…they don’t honor robots.txt like they say they do), and make sure my bot’s request rate was within reason. Google, Bing, and Yahoo all crawl the web without causing mass chaos and overwhelmed servers. Certainly if you have the intelligence to write a crawler, you have the intelligence to throttle a crawler.
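For what it’s worth, the throttling part is not hard. Here is a minimal sketch of a fetcher that refuses to hit a host more often than a fixed interval; the two-second delay is an arbitrary number of my own choosing, and checking robots.txt before fetching is left out entirely.

```csharp
// Minimal throttling sketch: never issue requests faster than a fixed delay allows.
using System;
using System.Net;
using System.Threading;

public class PoliteFetcher
{
    private static readonly TimeSpan DelayBetweenRequests = TimeSpan.FromSeconds(2);
    private DateTime _lastRequestUtc = DateTime.MinValue;

    public string Fetch(Uri url)
    {
        // Wait out the remainder of the delay window before touching the host again.
        var elapsed = DateTime.UtcNow - _lastRequestUtc;
        if (elapsed < DelayBetweenRequests)
            Thread.Sleep(DelayBetweenRequests - elapsed);

        _lastRequestUtc = DateTime.UtcNow;
        using (var client = new WebClient())
        {
            return client.DownloadString(url);
        }
    }
}
```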

Or maybe my standards are too high.