
High quality dynamically resized images with .net

This is an old post taken from my previous blogging system; unfortunately, I have temporarily lost all the great comments with coding examples that people posted on the subject. It's worth checking out Nathanael Jones's post on the different pitfalls to avoid in image resizing, and his module.

A lot of web sites make use of code which dynamically resizes images. This technique is great for producing thumbnails on the fly; in fact, I used it for the listing pages of this blog. I was a little disappointed with the quality, though: the images looked blurred and I could often see dithering or compression artefacts. While working on another project I spent some time researching how to increase image quality when resizing with .Net.

.Net provides a set of easy-to-use image manipulation classes called System.Drawing (GDI+). The default settings for these classes favour speed over quality. There are many examples on the web of how to resize images using these classes, but most of them produce low quality images. The most basic example would use the following code.


// Load the source image from a byte array (from a database or file)
System.IO.MemoryStream memoryStream = new System.IO.MemoryStream( byteArray );
System.Drawing.Image image = System.Drawing.Image.FromStream( memoryStream );

// Draw the source image into a new bitmap at the target size
System.Drawing.Image thumbnail = new System.Drawing.Bitmap( newWidth, newHeight );
System.Drawing.Graphics graphic = System.Drawing.Graphics.FromImage( thumbnail );
graphic.DrawImage( image, 0, 0, newWidth, newHeight );

// Write the result to the response as a JPEG with default settings
thumbnail.Save( Response.OutputStream, System.Drawing.Imaging.ImageFormat.Jpeg );

You start with a byte array which contains the image data loaded from either a database or file. You then resize the bitmap using a number of System.Drawing methods and finally save the bitmap to an output stream. In this case the output stream is the Response output stream. The first improvement is to include a quality setting for the JPEG compression.


// Search for the JPEG codec by MIME type; its index in GetImageEncoders()
// is not guaranteed
System.Drawing.Imaging.ImageCodecInfo jpegEncoder = null;
foreach( System.Drawing.Imaging.ImageCodecInfo codec in System.Drawing.Imaging.ImageCodecInfo.GetImageEncoders() )
    if( codec.MimeType == "image/jpeg" ) jpegEncoder = codec;

System.Drawing.Imaging.EncoderParameters encoderParams = new System.Drawing.Imaging.EncoderParameters(1);
encoderParams.Param[0] = new EncoderParameter( Encoder.Quality, 100L );
Response.ContentType = jpegEncoder.MimeType;
thumbnail.Save( Response.OutputStream, jpegEncoder, encoderParams );

Major benefits can be gained by setting the Graphics object properties to use the highest quality algorithms. The InterpolationMode especially affects resizing; the others matter whenever you use compositing methods. As DrawImage is a compositing method, you should include all four properties.


graphic.InterpolationMode = InterpolationMode.HighQualityBicubic;
graphic.SmoothingMode = SmoothingMode.HighQuality;
graphic.PixelOffsetMode = PixelOffsetMode.HighQuality;
graphic.CompositingQuality = CompositingQuality.HighQuality;

By default, .Net will utilize a web-safe palette when converting a bitmap to an image suitable for a web page. The result is that most gif files created using the above code would produce badly dithered images. Morgan Skinner wrote a fantastic article “Optimizing Color Quantization for ASP.NET Images” on how to create the optimal palettes for gif output. If you wish to output gif files you should include his code in your project and modify your code to fork when creating gif or jpeg output.


System.IO.MemoryStream memoryStream = new System.IO.MemoryStream( byteArray );
System.Drawing.Image image = System.Drawing.Image.FromStream( memoryStream );
System.Drawing.Image thumbnail = new System.Drawing.Bitmap( newWidth, newHeight );
System.Drawing.Graphics graphic = System.Drawing.Graphics.FromImage( thumbnail );

// Switch every quality-related setting to its best value before drawing
graphic.InterpolationMode = InterpolationMode.HighQualityBicubic;
graphic.SmoothingMode = SmoothingMode.HighQuality;
graphic.PixelOffsetMode = PixelOffsetMode.HighQuality;
graphic.CompositingQuality = CompositingQuality.HighQuality;
graphic.DrawImage( image, 0, 0, newWidth, newHeight );

if( contentType == "image/gif" )
{
    using( thumbnail )
    {
        // Quantize to an optimal palette instead of the web-safe one
        OctreeQuantizer quantizer = new OctreeQuantizer( 255, 8 );
        using( Bitmap quantized = quantizer.Quantize( thumbnail ) )
        {
            Response.ContentType = "image/gif";
            quantized.Save( Response.OutputStream, ImageFormat.Gif );
        }
    }
}

if( contentType == "image/jpeg" )
{
    // Locate the JPEG encoder by MIME type and save at maximum quality
    ImageCodecInfo jpegEncoder = null;
    foreach( ImageCodecInfo codec in ImageCodecInfo.GetImageEncoders() )
        if( codec.MimeType == "image/jpeg" ) jpegEncoder = codec;
    EncoderParameters encoderParameters = new EncoderParameters(1);
    encoderParameters.Param[0] = new EncoderParameter( Encoder.Quality, 100L );
    Response.ContentType = "image/jpeg";
    thumbnail.Save( Response.OutputStream, jpegEncoder, encoderParameters );
}

All this additional processing adds load time. On commercial sites the developers at my company often build in a caching mechanism for commonly requested sizes.
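One such caching mechanism can be sketched as below: resized images are written to disk keyed by image id and target size, so the expensive resize runs only on the first request. The class name, the cacheRoot parameter, and the produce callback are illustrative names of my own, not part of any existing API.

```csharp
using System;
using System.IO;

static class ThumbnailCache
{
    // Returns the cached bytes for this image/size pair, resizing and
    // storing them on the first request only.
    public static byte[] GetOrCreate(string cacheRoot, string imageId,
        int width, int height, Func<byte[]> produce)
    {
        string cacheFile = Path.Combine(
            cacheRoot, imageId + "_" + width + "x" + height + ".jpg");

        if (File.Exists(cacheFile))
        {
            // Serve the copy written on a previous request
            return File.ReadAllBytes(cacheFile);
        }

        // First request for this size: resize once and store the result
        byte[] resized = produce();
        File.WriteAllBytes(cacheFile, resized);
        return resized;
    }
}
```

In a handler, the produce callback would wrap the resizing code shown above; a real implementation would also need to handle concurrent writes and cache invalidation when the source image changes.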

Although System.Drawing provides a GetThumbnailImage method, it only works well when the requested thumbnail is around 120 x 120 pixels; request a larger image and there can be a noticeable loss of quality. The method will also use any thumbnail embedded in the image data, which can cause problems when the original was generated by a digital camera. Because of these issues I use the DrawImage method.
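When computing newWidth and newHeight for DrawImage it is usually worth preserving the source aspect ratio rather than stretching the image to fit. A minimal sketch of the arithmetic, with a helper name of my own invention:

```csharp
using System;

static class ThumbnailMath
{
    // Computes target dimensions that fit inside maxWidth x maxHeight
    // while preserving the source aspect ratio.
    public static void ScaleToFit(int srcWidth, int srcHeight,
        int maxWidth, int maxHeight, out int newWidth, out int newHeight)
    {
        double scale = Math.Min((double)maxWidth / srcWidth,
                                (double)maxHeight / srcHeight);
        if (scale > 1.0)
        {
            scale = 1.0; // never enlarge beyond the source size
        }
        newWidth = Math.Max(1, (int)Math.Round(srcWidth * scale));
        newHeight = Math.Max(1, (int)Math.Round(srcHeight * scale));
    }
}
```

For example, a 1600 x 1200 source constrained to 400 x 400 comes out as 400 x 300, so nothing is distorted.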

Web Adverts vs Web Standards

As a web user, I have come to accept that without advertising revenue some of my favourite sites would not exist. As a designer, I have also come to accept that integrating and balancing advertising needs is a fundamental part of commercial web design.

The problem with web adverts is their total disregard for usability and web standards. Over the last couple of years we have seen some major commercial publishers move to a web standards approach. For the designers of these sites one of the biggest issues is integrating the advertiser’s code.

With the growth of rich media adverts we have moved away from inserting simple banner images and hyperlinks to the extensive use of JavaScript. This type of code is very intrusive, and although it can often degrade gracefully, it causes many rendering problems.

Today, most adverts have to be inserted using inline blocks of JavaScript. This code calls external JavaScript files on other servers. These files can be chained, so that one file calls another repeatedly. The last file writes HTML into the page using the document.write function.

Some of the issues are:

  • Obtrusive inline JavaScript blocks
  • Structural markup and behaviour are mixed together
  • The advertisement HTML is not written to any DOCTYPE or standard
  • Using the document onload event to add other behaviours becomes impractical
  • A large amount of code is loaded even if no adverts are displayed
  • The semantic structure of pages can be compromised
  • The document.write function is used instead of DOM methods
  • The JavaScript often uses browser detection instead of object detection

Over the last few months I have noticed a growth in the number of JavaScript errors and CSS rendering problems. These are often caused because the advertisers' code is generic and unsuitable for standards-based web pages (HTML4 or XHTML).

In fact the problem is that this methodology is blind to the structure of the page into which it inserts its HTML. There are some major differences between XHTML rendering in strict compliance mode and older HTML-coded sites: CSS layouts do not render exactly the same, and some JavaScript will stop working altogether.

In a recent site I designed, the inline JavaScript for adverts made up 30% of the total HTML file size. This code would make at least 8 calls for external JavaScript files, but often only displayed 4 adverts.

Using the document.write function causes the browser to stop rendering while it waits for external assets to download. This makes page rendering very slow, and users will often start to interact with the document before it has completely loaded. Any JavaScript enhancements that rely on the document onload event may therefore appear long after the page is usable, and users find it very disconcerting when enhancements are suddenly added to a page well after they have started interacting with it.

Over the last year Google has made the AdSense format very popular, and this easy-to-use format has spread to personal sites and blogs. The AdSense text format gives the impression that it's lean and clean, but the delivery mechanism is roughly the same as for rich media web adverts. It does not answer the underlying issues of obtrusive inline JavaScript blocks and the mixing of structural code and behaviour.

What you start to realise is that the problem lies in some part with the web advert management software, which is built to serve rich formats through chained JavaScript files.

I believe what is needed is a true syndication model for web adverts. It would not be too much effort to describe all the properties of current web adverts in an XML document. This could then be used either by a server-side script to add code directly into the HTML, or by a DOM script on the client side.
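As an illustration, a single advert might be described along these lines; the element names and URLs here are my own invention, not an existing format:

```xml
<advert id="example-123">
  <campaign>spring-sale</campaign>
  <format>banner</format>
  <media type="image/gif" width="468" height="60">
    http://ads.example.com/creative/123.gif
  </media>
  <link>http://www.example.com/landing-page</link>
  <text>Save 20% this spring</text>
</advert>
```

From a description like this, the hosting site could generate whatever HTML suits its own document structure, rather than accepting whatever document.write produces.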

If the content was syndicated, it could be fitted correctly into any document structure. The XML would have to carry the advertiser's presentation requirements, but the site designer could choose the exact HTML output to match the page. The advertiser may lose a little presentational control, but they would gain a format that is device independent.

It would always be in the interest of the hosting site to display the adverts to the best of its ability, as advertisers will always be able to measure the effectiveness of a campaign with click-through metrics.

The technology needed to create an advert syndication model is well tested and understood. To change, the development community needs to recognise and then stand up to the current bad practice. There will be resistance to change from the advertising industry, but standardisation is in everyone's long-term interest.

I would be very interested in other people's experience of implementing web adverts, and to hear whether there is already an alternative approach which addresses the issues I have mentioned.

d.Construct 2005

I was lucky enough to get a place at d.Construct 2005 and I am looking forward to what should be a fascinating discussion of Web 2.0 and Ajax. What really interests me is getting involved in a long overdue debate about the design and functionality of modern web-based applications.

I must admit that at the moment I am a bit sceptical about some of the language, such as "how new technology is transforming the web from a document delivery system to an application platform", and the grandiose-sounding "Web 2.0".

I created a few web-based applications using remote scripting before the XMLHttpRequest object became available; they were based around the IE Java proxy or reloading hidden frames. That was about five years ago, and since then I have resisted building web interfaces with any type of remote scripting because of compatibility and asynchronous load issues.

The advent of widespread DOM support and the use of the XMLHttpRequest object has sparked a re-evaluation of client-server communications. The current buzz around Ajax is starting to make me rethink some of my own objections to using this technology.

I will be going to the event with an open mind, hoping the speakers can convince me that there is a new model for building web-based applications, and that this new model will change how we view them. It may take a little longer to convince me to use the phraseology (Web 2.0).

The conference is being held in my home town of Brighton, in fact only a few hundred metres from my office. For years Brighton has fostered a growing web design and development community, and I hope we will see a few more events like this.

  • Events

DOM Scripting

Aimed at web designers who wish to learn more about DOM scripting, this book expands the web standards movement into new ground. Jeremy Keith has managed to craft a book that is both a great learning tool for those new to the subject and a call to arms for those of us who have always believed that scripting has its place in modern web design.

The book uses a series of tutorials which build slowly to illustrate that you can write unobtrusive JavaScript that progressively enhances the user experience. The language is clear and surprisingly unpreachy considering the strength of the underlying message. I also like the simplicity of some of the code, which I am sure I will be using in the future.

Much like the CSS-based web standards approach to presentation, DOM scripting still has issues, but this book goes a long way towards providing clear best practice that we can all follow. A small lighthouse for what has been a long and sometimes dark voyage for JavaScript.

  • JavaScript
