Thursday, July 19, 2007

ASP.NET Orcas: Quiet Revolution

A few months ago, Microsoft released Beta 1 of the .NET Framework 3.5 and Visual Studio 2008, codename Orcas (http://www.tinyurl.com/269434). Microsoft's versioning is a bit wacky for this release cycle, and if you haven't been paying close attention you might wonder whether you somehow missed ASP.NET 3.0 along the way. You didn't. Microsoft didn't make an ASP.NET 3.0 update. The .NET 3.0 release cycle focused on add-on libraries: Windows Presentation Foundation (WPF), Windows Workflow Foundation (WF), and Windows Communication Foundation (WCF) all consist entirely of new libraries that run on top of the .NET 2.0 runtime, which itself wasn't updated by the 3.0 release. For ASP.NET this means that .NET 3.0 really had no effect on the existing feature set and functionality. Daniel Moth sums up the confusing runtime arrangements from 2.0 to 3.5 (http://www.tinyurl.com/ywdp92) in a blog post.

.NET 3.5 will be the next major release of the .NET runtime, and it does change the base libraries as well as add a number of new libraries to the core framework. Even with these updates, though, the changes in ASP.NET 3.5 are relatively minor. Most of the key enhancements in .NET 3.5 are to the core runtime and the languages, especially those related to Language Integrated Query (LINQ) and its supporting features.

A Different Kind of Update

When Microsoft released ASP.NET 2.0, they made some pretty major changes from version 1.1. Microsoft changed almost every aspect of the HTTP runtime and the core ASP.NET engine, including a whole new compilation model and new project types. Although you could run 1.x applications with very few (if any) changes, making those applications into full-blown 2.0 applications required a host of changes and updates. ASP.NET 2.0 introduced a tremendous amount of new functionality: new controls, many new HTTP runtime features, and provider models for many aspects of the runtime. It's probably not an overstatement to say that ASP.NET 2.0 brought huge changes that many of us are still digesting one and a half years later.

ASP.NET 3.5, on the other hand, looks to be a much less heavy-handed update, and most of the changes are evolutionary rather than revolutionary. In fact, as far as pure runtime features are concerned, not much will change at all. A good number of the enhancements that the ASP.NET team will roll into the Orcas release have to do with more tightly integrating technologies that are already released and in use. This includes direct integration of the ASP.NET AJAX features into the runtime and full support for IIS 7's managed pipeline (which in its own right could have been called ASP.NET 3.0!).

For you as a developer, this means that moving to Visual Studio 2008 and ASP.NET 3.5 is probably going to be a lot less tumultuous than the upgrade to 2.0. I, for one, am glad that this is the case, as the change from 1.1 to 2.0 involved a lot of conversion work. In my early experimentation with Orcas Beta 1 I've migrated existing applications painlessly to .NET 3.5 and could immediately start adding new 3.5 features without that horrible "legacy" feeling about my code that typically comes with a major .NET version upgrade.

That isn't to say that .NET 3.5 will not have anything new or the potential for making you *want* to rewrite your applications, but getting a raw 2.0 application running and compiling properly under Visual Studio 2008 either for .NET 2.0 or 3.5 will be very easy.

ASP.NET 3.5: Minimal Changes

ASP.NET 3.5 doesn't include a ton of new features or controls. In fact, looking through the System.Web.Extensions assembly I found only two new controls: ListView and DataPager.

The ListView control is a new data-bound list control that is a hybrid of a GridView and Repeater. You get the free-form templating of a Repeater with the data editing, selection, and sorting features of a data grid, but without being forced into a tabular layout. The ListView supports table, flow, and bulleted-list layouts, so it offers a lot of flexibility and some built-in smarts for the type of layout used. All of this provides more flexibility and more control over your layout than either the GridView or the Repeater has on its own. The DataPager control works in combination with a ListView to provide custom paging functionality, with the DataPager acting as a behavior for the ListView.

The System.Web.Extensions assembly contains all of the new functionality for ASP.NET 3.5, and it should be familiar since it also contains the ASP.NET AJAX features that Microsoft has already made available as part of the ASP.NET AJAX distribution. With .NET 3.5, System.Web.Extensions becomes a native part of the framework, so you don't have to make a separate installation to use ASP.NET AJAX features. Unfortunately, it looks like you must still add the ugly configuration settings required by the AJAX features to web.config. I had hoped that the ASP.NET team could have moved these settings into the machine-wide web.config settings.

In addition to the core features already in System.Web.Extensions, Microsoft has indicated additional pending features in the ASP.NET Futures CTP code (http://www.asp.net/downloads/futures/default.aspx). The ASP.NET Futures release includes code for a host of features, some of which may or may not make it into the actual framework distribution. It includes a number of additional ASP.NET AJAX client library features, a host of XAML and Silverlight support features, and an interesting dynamic data framework called Dynamic Data Controls (http://www.tinyurl.com/2ktlzw), which is a Rails-like tool to quickly generate a full data manipulation interface to a database. Microsoft has not indicated which of these Futures features will make it into the shipping version of .NET 3.5, but some will undoubtedly make the cut.

The Real Change Maker Is LINQ

While ASP.NET itself may not make a huge wave of changes, that doesn't mean there won't be plenty of new stuff for developers to learn. To me, the biggest change in .NET 3.5 is clearly the introduction of Language Integrated Query (LINQ), and I bet that this technology, more than anything else, will affect the way developers code applications, much in the way Generics affected .NET 2.0 coding. Like Generics, LINQ will require a bit of experimenting to get used to, but once you 'get' the core set of LINQ functionality, it will be hard to live without.

In a nutshell, LINQ provides a new mechanism for querying data from a variety of "data stores" in a way that is more intuitive and less code-intensive than procedural code. With strong similarities to SQL, LINQ uses query parsing techniques within the compiler to reduce the amount of code you have to write for filtering, reordering, and restructuring data. With LINQ you can query any IEnumerable-based list, a database (LINQ to SQL http://www.tinyurl.com/34bnwh) and XML (LINQ to XML http://www.tinyurl.com/2lrofy). You can also create new providers so, in theory, LINQ can act as a front end for any data store that publishes a LINQ provider.

LINQ generally returns an IEnumerable-based list whose element type is specific to the data you are querying. What's especially nice is that the returned data can be in an inferred format, so you can get strongly-typed data back from what is essentially a dynamic query. LINQ can either project query results into dynamically constructed types that the compiler generates through type inference, or into an explicitly specified type that matches the query result signature.

LINQ makes it possible to use a SQL-like syntax to query over various types of lists. While this may not sound all that exciting, take a minute to think about how you often deal with list data in an application in order to reorganize it by sorting or filtering. While it's not too difficult to do this using either raw code with a foreach loop or the .NET 2.0 predicate syntax on the generic collection types, it's still a pretty verbose process that often splits code into multiple code sections. LINQ instead provides a more concise and dynamic mechanism for re-querying and reshaping data. I have no doubt it will take some time to get used to the power that LINQ offers languages like C# and Visual Basic, but I bet it will end up making a big change in the way that developers write applications.

For example, think of simply querying a list of objects (List<Customer> in this case) for a query condition like this:

var CustQuery =
    from c in Customers
    where c.Entered > DateTime.Now.AddDays(-1) &&
          c.Company.Contains("Wind")
    select new { c.Name, c.Company, Date = c.Entered };

A LINQ query expression like this is really just the definition of the query. Nothing actually executes until the data is enumerated or otherwise accessed: behind the scenes the compiler translates the query into extension method calls such as .Where() and .Select() on the collection, and those only run when the result is iterated.
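To see that deferred execution at work, here's a small sketch of mine (Customers is the List&lt;Customer&gt; from the query above; the added customer is made up):

var recent = from c in Customers
             where c.Entered > DateTime.Now.AddDays(-1)
             select c;

// Nothing has executed yet; add another matching customer...
Customers.Add(new Customer { Name = "Jane Doe",
                             Company = "North Wind",
                             Entered = DateTime.Now });

// ...and it still shows up when the query is finally enumerated.
foreach (var c in recent)
    Response.Write(c.Name + "<br>");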

The actual result of the CustQuery expression above is a dynamically typed IEnumerable of an anonymous type, which you can simply run through with foreach:

foreach (var c in CustQuery)
{
    Response.Write(c.Company + " - " +
                   c.Name + " - " +
                   c.Date.ToString() +
                   " <hr>");
}

Microsoft has made the LINQ syntax a lot more compact than similar procedural code and, to me at least, easier to understand just looking at that code block. What takes a bit of getting used to is just how many things you can actually apply LINQ to as it works with just about any kind of list.
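For comparison, here's roughly what the same filter looks like written procedurally (my sketch, using the same Customer class); reshaping the result down to just the three fields would require yet another helper class on top of this:

List<Customer> matches = new List<Customer>();
foreach (Customer c in Customers)
{
    if (c.Entered > DateTime.Now.AddDays(-1) &&
        c.Company.Contains("Wind"))
    {
        matches.Add(c);
    }
}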

If you look at the LINQ code above you see quite a few new features of the C# language (Visual Basic has most of these same features), which may seem a little unnerving. In order for LINQ to work, Microsoft needed to make a number of modifications to the language compilers and the core runtime. The features that make LINQ work rely on type inference and on the ability to shortcut certain verbose language constructs like object initialization and delegate-based expression evaluation. Here's a quick run-through of some of the more important language features in C# that are related to LINQ but useful in their own right.

Anonymous Types and Implicit Typing

Anonymous types are types that are created without explicitly specifying the type of each of their members. They are basically a shortcut for object creation in which the compiler figures out the type of each member based on the value assigned to it.

This is a crucial feature for LINQ, which uses it to construct dynamic result types based on the fields chosen in the query. You construct an anonymous type like this:

var Person = new {
    Name = "Rick Strahl",
    Company = "West Wind",
    Entered = DateTime.Now
};

The variable holds an anonymous type that is compiler-generated and has local method scope. The type has no usable name and can't be constructed directly; it exists only as the result of this anonymous type declaration. The type is generated at compile time, not at runtime, so it acts just like any other type, with the difference that it's not visible outside of the local method scope.

Var declarations are most commonly used with objects, but they can also be used with simple types like int, bool, and string. You can use var in any scenario where the compiler can infer the type:

var name = "Ken Dooit";

Here the compiler creates a string object, and any reference to name is treated as a string.

By itself this feature sounds like a bad idea; after all, strong typing and compiler safety are features of .NET languages that most of us take for granted. But it's important to understand that behind the scenes the compiler still creates strongly typed objects, simply by inferring the type based on the values assigned or the parameters expected by a method call or assignment.

I doubt I would ever use this feature with normal types in a method, but it really becomes useful when passing objects as parameters: you can imagine many situations where you create classes merely as message containers in order to pass them to other methods. Anonymous types allow you to simply declare the type inline, which makes the code easier to write and more expressive to read.

This is important for LINQ, which often returns query results as a var and needs to create result types dynamically based on the fields selected. With LINQ, a query result assigned to var is an IEnumerable<A>, where A is the anonymous type. The type works as if it were a normal .NET type, and Visual Studio 2008 is even smart enough to provide IntelliSense for the inferred members. So you can have code like the following (assuming Customer is a previously defined class):

List<Customer> Customers = new List<Customer>();
Customers.Add(new Customer {
    Name = "Rick Strahl",
    Company = "West Wind",
    Entered = DateTime.Now.AddDays(-5)
});
Customers.Add(new Customer {
    Name = "Markus Egger",
    Company = "EPS",
    Entered = DateTime.Now
});
// … add more here

var CustQuery = from c in Customers
                where c.Entered > DateTime.Now.AddDays(-1)
                select new { c.Name, c.Company, Date = c.Entered };

foreach (var c in CustQuery)
{
    Response.Write(c.Company + " - " +
                   c.Name + " - " +
                   c.Date.ToString() + " <hr>");
}

You'll notice the var result, which is an IEnumerable of an anonymous type that has Name, Company, and Date properties. The compiler knows this and fixes up the code accordingly, so referencing c.Company resolves to the anonymous type's Company field, which is a string.

It's important to understand one serious limitation of this technology: anonymous types are limited to local method scope, meaning that you can't pass the type out of the method and expect full typing semantics outside of the method that declared it. Once out of the local method scope, a var result becomes equivalent to an object result, with access to any of the dynamic properties available only through Reflection. This makes dynamic LINQ results a little less useful, but thankfully you can also project results into existing types. So you could potentially rewrite the above query like this:

IEnumerable<Customer> CustQuery =
    from c in Customers
    where c.Entered > DateTime.Now.AddDays(-1) &&
          c.Company.Contains("Wind")
    select new Customer {
        Name = c.Name,
        Company = c.Company
    };

Notice that the select clause writes into a new Customer object rather than generating an anonymous type. In this case you end up with a strong reference to a Customer object and an IEnumerable of Customer. This works because the object initializer assigns the values that the new object should take, so you end up mapping properties from one type of object to another. So if you have a Person class that also has a Name and Company field, but no Address or Entered field, you can select new Person() and get the Name and Company fields filled.

IEnumerable<Person> CustQuery =
    from c in Customers
    where c.Entered > DateTime.Now.AddDays(-1) &&
          c.Company.Contains("Wind")
    select new Person {
        Name = c.Name,
        Company = c.Company
    };

You can then pass results typed this way out of methods because they are just instances of standard .NET classes.

Object Initializers

In the last queries above, I used object initialization syntax to assign the name and company in the Person class. Notice that I could simply assign any property value inside of the curly brackets. It's a declarative, single-statement way of initializing an object without requiring a slew of constructors to support each combination of property assignments. In the code above the object is initialized with the values of the object currently being iterated (c, a Customer instance, in this case).

This is a great shortcut feature that makes for more compact code, but it's also crucial to the compact syntax required to make property assignments work in the select portion of a LINQ query. Object initializers, in combination with anonymous types, effectively allow you to create a new type inline in your code, which is very useful in many coding situations.
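As a quick illustration (mine, not from the examples above), the initializer syntax replaces the usual construct-then-assign pattern:

// Without object initializers: construct, then assign each property.
Customer cust1 = new Customer();
cust1.Name = "Rick Strahl";
cust1.Company = "West Wind";
cust1.Entered = DateTime.Now;

// With object initializers: one declarative statement,
// no special constructor required.
Customer cust2 = new Customer { Name = "Rick Strahl",
                                Company = "West Wind",
                                Entered = DateTime.Now };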

Extension Methods

Extension methods allow you to extend existing types through static methods that use a specific syntax. Here's an extension method that extends the string type with an IsUpper function in C#:

public static class StringExtensions
{
    public static bool IsUpper(this string s)
    {
        return s.ToUpper() == s;
    }
}

This syntax attaches the extension method to the string type by way of the this modifier on the first parameter, which is passed implicitly and is always the instance the method is called on. To use the IsUpper function on any string instance you have to ensure that the namespace containing the extension class is visible in the current code; if it is, you can use the extension method like this:

string s = "Hello World";
bool Result = s.IsUpper();

The extension method is scoped to the string class by way of the first parameter, which is always passed to an extension method implicitly. The C# syntax is a little funky; Visual Basic uses an <Extension()> attribute to provide the same functionality. Arguably I find the Visual Basic version more explicit and natural, which doesn't happen often.

Behind the scenes, the compiler turns the extension method into a simple static method call that passes the current instance as the first parameter. Because the methods are static and the instance is passed as a parameter, you only have access to public class members.
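To make that concrete, both of the following calls compile down to the same static invocation of the StringExtensions method defined above:

string s = "HELLO WORLD";

bool viaExtension = s.IsUpper();                  // extension method syntax
bool viaStatic = StringExtensions.IsUpper(s);     // what the compiler actually emits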

This seems like a great tool to add behaviors to existing classes without modifying them. You can even extend sealed classes in this way, which opens a few new avenues of extension.

Lambda Expressions

Lambda expressions (http://preview.tinyurl.com/37n9ll) are a syntactic shortcut for anonymous method declarations, and they are what make LINQ filtering and sorting work. In the LINQ code I showed above, I used the expressive query syntax, which is actually parsed by the compiler and turned into lower-level method syntax that chains together the various clauses into a single long expression.

One of the sections of a LINQ query deals with the expression parsing for the where clause (or more precisely the .Where() and .OrderBy() methods). So this code:

IEnumerable<Customer> custs1 =
    from c in Customers
    where c.Entered > DateTime.Now.AddDays(-1)
    select c;

is really equivalent to:

IEnumerable<Customer> custs1 =
    Customers.Where(c => c.Entered > DateTime.Now.AddDays(-1));

which is also equivalent to:

IEnumerable<Customer> custs2 =
    Customers.Where(
        delegate(Customer c)
        {
            return c.Entered > DateTime.Now.AddDays(-1);
        });

So you can think of a lambda expression as a shortcut to an anonymous method call where the left-most section is the parameter name (the type of which is inferred). Lambda expressions can be simple expressions as above, or full code blocks which are wrapped in block delimiters ({ } in C#).

Behind the scenes, the compiler generates code to hook up the delegate and calls it as part of the enumeration sequence that runs through the data to filter and rearrange it.

Lambda expressions can be represented as delegates, which is the functional aspect and deals with how they actually get executed. However, you can also assign them to an Expression<Func<>> object, which makes it possible to parse the expression rather than just execute it. This low-level feature can be used to implement custom LINQ providers such as LINQ to SQL and LINQ to XML. These technologies take the inbound expressions and translate them into queries against completely separate data stores: a SQL database (in which case the LINQ query is translated into SQL statements) or XML (in which case XPath and XmlDocument operations are used to retrieve the data).
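Here's a minimal sketch of mine showing the same lambda captured once as an executable delegate and once as an expression tree that a provider could inspect (Func<> lives in System, Expression<> in System.Linq.Expressions):

// Compiled straight to IL: you can only invoke it.
Func<Customer, bool> asDelegate =
    c => c.Entered > DateTime.Now.AddDays(-1);

// Captured as an expression tree: a LINQ provider can walk it and
// translate it into SQL, XPath, an LDAP filter, and so on.
Expression<Func<Customer, bool>> asExpression =
    c => c.Entered > DateTime.Now.AddDays(-1);

// An expression tree can also be compiled and executed directly.
Func<Customer, bool> compiled = asExpression.Compile();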

It's a very powerful feature to say the least, but probably one that's not going to be used much at the application level. Expression parsing is going to be primarily of interest to framework builders who want to interface with special kinds of data stores or custom generated interfaces. There are already custom providers springing up, such as a LINQ provider for LDAP (http://www.tinyurl.com/3e5bqv) and one for NHibernate (http://www.tinyurl.com/2udq9a).

LINQ and SQL

One of the major aspects of LINQ is its ability to access database data using this same LINQ syntax. Using query syntax native to the language, as opposed to SQL strings, coupled with the inferred typing and IntelliSense support that LINQ provides, makes it easier to retrieve data from a database. It also helps reduce errors by providing some degree of type checking for database fields.

LINQ to SQL

You can use LINQ with a database in a couple of ways. The first tool, LINQ to SQL, provides a simplified entity mapping toolset. LINQ to SQL lets you map a database to entities using a new LINQ to SQL designer that imports the schema of the database and creates classes and relationships to build a simple object representation of the database. The mapping is done primarily through class attributes that map entity properties to table fields, and child objects via foreign keys and relations.
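The generated entity classes look roughly like this hand-written sketch (simplified and hypothetical; the real designer output adds storage fields, change notification, and association properties):

using System.Data.Linq.Mapping;

[Table(Name = "Customers")]
public class Customer
{
    [Column(IsPrimaryKey = true)]
    public string CustomerID { get; set; }

    [Column]
    public string Company { get; set; }

    [Column]
    public DateTime Entered { get; set; }
}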

Visual Studio 2008 will ship with a LINQ to SQL designer that lets you quickly select an entire database or individual tables and lets you graphically import and map your database.

By default the mappings are one to one: each table becomes a class and each field a property. Relationships are mapped as child collections or properties depending on whether they are one-to-one or one-to-many. You can choose to import either an entire database or just selected tables.

Once the mapping's been done, the schema is available for querying using standard LINQ syntax against the object types that the mapper creates. The objects are created as partial classes that can be extended, and these entity classes provide the schema that enforces type safety in the query code.

LINQ to SQL is a great tool for quickly building a CRUD access layer for a database: simple insert/update/delete functionality using entity objects. It also provides strong typing in LINQ queries against the database, which means better type safety and the ability to discover what database tables and fields are available right in your code. LINQ to SQL works through a DataContext object, which is a simplified O/R manager. You can also use this object to create new object instances and add them to the database, or load and update individual items for CRUD data access. Since CRUD data access often amounts to a large part of an application, this is a very useful addition.
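Here's a rough sketch of what working with the DataContext looks like (NorthwindDataContext and the entity shape are hypothetical, and the method names follow the API as it eventually shipped, which may differ slightly from Beta 1):

using (NorthwindDataContext db = new NorthwindDataContext())
{
    // Query: strongly typed against the generated entity classes.
    var windCustomers = from c in db.Customers
                        where c.Company.Contains("Wind")
                        select c;

    // Insert: create an entity, hand it to the context, then submit.
    Customer cust = new Customer { Company = "West Wind" };
    db.Customers.InsertOnSubmit(cust);
    db.SubmitChanges();
}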

To me, LINQ to SQL is a big improvement over strongly-typed datasets (if you are using them). It provides much of the same functionality in a more lightweight package and a more intuitive manipulation format.

But there's probably not enough here to make pure Object Relational Mapper diehards happy. The attribute-based model and the fact that the current generator doesn't do non-destructive database refreshes for entities are somewhat limiting, but it's still a great tool to quickly generate a data layer and address the 60-80% of data access scenarios that deal with CRUD operations.

ADO.NET Entity Framework - LINQ to Entities

For the more diehard ORM folks, Microsoft is also working on a more complete object relational framework called the ADO.NET Entity Framework (http://www.tinyurl.com/2jbxo2). The Entity Framework sports a more complete mapping layer based on XML map files: separate files describe the physical and conceptual schemas, plus a mapping layer that relates the two. The Entity Framework also integrates more tightly with ADO.NET, using familiar ADO.NET-style objects such as EntityConnection, EntityCommand, and EntityReader to access data. You can query Entity Framework data in a number of ways, including LINQ as well as a custom language called Entity SQL, which is a T-SQL-like language for retrieving object data. I'm not quite clear on why you'd need yet another query language beyond raw SQL and the LINQ mapping, which confuses the lines even more; it seems to me that LINQ would be the preferred mechanism to query and retrieve data. The Entity Framework also provides a data context that LINQ can use for database access, so Entity Framework objects are directly supported through LINQ.
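For a sense of the LINQ to Entities side, here's a hedged sketch (NorthwindEntities stands in for whatever context class the Entity Framework tools generate for you; the property names are made up):

using (NorthwindEntities ctx = new NorthwindEntities())
{
    var windCustomers = from c in ctx.Customers
                        where c.Company.Contains("Wind")
                        select c;

    foreach (var c in windCustomers)
        Response.Write(c.Company + " - " + c.Name + "<br>");
}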

The Entity Framework provides more control over the object relational mapping process, but it's also a bit more work to set up and work with. Microsoft will ship the framework with Orcas, but it appears that rich design-time support will not arrive until sometime after Orcas is released. Currently there's a one-time wizard that you can run to create your schemas and mapping layers, but after that you have to resort to manually managing the three XML files that make up the entity mapping. It'll be interesting to keep an eye on the toolset and see how it stacks up against solutions like NHibernate.

Major Visual Studio Changes

Finally, let's not forget Visual Studio 2008 itself, the tooling side of the Orcas release cycle. I am happy to see that this version of Visual Studio is not a complete overhaul of the entire environment. Microsoft will add many new features, but the way that projects work will basically stay the same. Visual Studio 2008 also uses the same add-in model as previous versions, so most of your favorite plug-ins, templates, and IntelliSense scripts should continue to work.

Nevertheless there are also some major changes under the hood for ASP.NET.

New HTML Editor

I don't think I know anybody who actually likes the ASP.NET HTML designer in Visual Studio 2005. It's slow as hell and has a number of issues with control rendering. For me it's become so much of a problem that I rarely use the designer for anything more than getting controls onto a page. I then spend the rest of my time using markup for page design. It's not that I don't want to use a visual designer, but it's tough when the designer is so slow and unpredictable.

Visual Studio 2008 introduces a brand spanking new designer based on the designer used in Expression Web. Microsoft completely redesigned the editor and, more importantly, they didn't base it on the slow and buggy HTML edit control. The new editor provides a host of new features, including a much more productive split-pane view that lets you see both markup and WYSIWYG displays at the same time, in real time. You can switch between the panes instantly for editing and see the changes in both, with no hesitation when moving between them. The editor's rendering appears to be more accurate than the old one and, maybe even more important, the editor is considerably faster at loading even complex pages. Considerably faster! Even complex pages that contain master pages and many controls render in less than a second as opposed to 10+ seconds in Visual Studio 2005. Further, because the split view shows both design-time and markup views, you rarely need to switch view modes.

The property sheet now also works in markup view when your cursor is over a control, which makes markup view more usable. Microsoft added this feature in Visual Studio 2005 SP1, where it didn't work reliably, but it works perfectly in Visual Studio 2008, including the ability to get to the event list from markup.

There's also much deeper support for CSS, starting with the editor giving you a list of all styles available from inline styles and external style sheets, complete with a preview of each style. A CSS properties page lets you examine which CSS classes are applied to a given HTML or control element and then quickly browse through the styles to see which attributes apply and which are overridden. The CSS tools are a little overwhelming because they live in three different windows and have several options giving different views. It takes a little experimenting to figure out all that is available and which of the windows you actually want to use, but it's well worth the effort.

Improved JavaScript

One highly anticipated feature of Visual Studio 2008 is the improved JavaScript support. With the advent of AJAX, developers will write a lot more JavaScript code in the context of ASP.NET applications, and Visual Studio 2008 provides a number of new editor features that go well beyond the minimal JavaScript support Visual Studio has offered so far.

Visual Studio 2008 provides improved IntelliSense support. Visual Studio supported IntelliSense for JavaScript prior to Visual Studio 2008, but it was extremely limited: it basically worked for a few known document objects with some static and incomplete member lists. Visual Studio 2008's JavaScript support is more dynamic and can understand JavaScript functions defined in the page or in external JavaScript files that are referenced either via the <script src> tag or by using an ASP.NET AJAX ScriptManager control to add the scripts. JavaScript (.js) files can also reference other scripts by using a special comment syntax at the top of the .js file to reference another .js source file.

While the JavaScript IntelliSense works well enough with local functions, and reasonably well with the ASP.NET AJAX libraries that are documented and follow the exact ASP.NET AJAX guidelines, I've had no luck at all getting the new IntelliSense features to work with other libraries. For example, opening Prototype.js as a related .js file on a page results in no IntelliSense whatsoever for Prototype's functionality. None of the classes, or even the static functions, show up. It appears that Visual Studio 2008 only picks up classes defined exactly the way ASP.NET AJAX prescribes. I sincerely hope that some of these issues get addressed because, as it stands in Beta 1, the new IntelliSense features in Visual Studio 2008 don't help me at all with my JavaScript development.

I'm also disappointed that Visual Studio still does not offer support for some sort of JavaScript function/class browsing utility. I frequently look at rather large JavaScript library files and there's just no good way to browse through the content short of raw text view and using Find. A class/function browser that shows all functions or better yet objects and members (which is more difficult) would certainly make navigating a large library file much easier. No such luck.

On a more positive note, Microsoft directly integrated JavaScript into the ASP.NET debugging process. You can now set breakpoints in JavaScript code in the IDE without having to sprinkle debugger; keywords throughout your code. Running the project will now automatically respect your JavaScript breakpoints in HTML, ASPX pages, and any .js files that are part of the project. The Locals and Watch windows also provide more useful and organized information on objects with class members sorted by type (methods and fields). The debugging experience is seamless so you can debug both client- and server-side code in a single session. This is a great improvement for JavaScript debugging and has made me go back to using Visual Studio script debugging recently. While I won't throw out my copy of FireBug just yet, I find that for logic debugging the experience is much smoother when directly integrated in Visual Studio 2008.

Multi-Targeting

I want to mention one Visual Studio feature that is not specific to ASP.NET but it's one of the more prominent features that I think will make the transition to Visual Studio 2008 much easier. Visual Studio 2008 supports targeting multiple versions of the .NET Runtime so you're not tied to a particular version. You can create projects for .NET 2.0, 3.0, and 3.5, and when you choose one of these project types Visual Studio will automatically add the appropriate runtime libraries to your project. You can also easily switch between versions so you can migrate a Visual Studio 2005 project to Visual Studio 2008 with .NET 2.0, work on that for a while, and then at a later point in time decide to upgrade to .NET 3.5.

This should make it much easier to take advantage of many of the new features of Visual Studio (the new editor and the JavaScript improvements, for example) even if you continue to build plain .NET 2.0 applications for some time to come.

Closing Thoughts

I've been running Visual Studio 2008 Beta 1 since Microsoft released it in February and I find its overall performance and stability pleasantly surprising. Yes, there are parts that are a little unstable, but unlike previous .NET and VS.NET betas, the Visual Studio 2008 beta feels decidedly solid; Visual Studio 2008 is not only usable, but more usable in many ways than Visual Studio 2005. Visual Studio 2008 as a whole feels snappier. The new HTML and markup editor alone is worth the trouble. While Visual Studio 2008 Beta 1 still has some issues, most are minor and outright crashes are very rare; in fact, I haven't crashed Visual Studio 2008 any more than I crash Visual Studio 2005. Even many of the new designers, such as the LINQ to SQL designer, work well at this point.

I'm excited about many features in Visual Studio 2008, and although this release of .NET 3.5 and Visual Studio 2008 has not so far received the same hype that Visual Studio 2005 received (thankfully), I think it is turning out to be solid and brings many new practical features to the framework, as well as improved tool support, all in a way that isn't overwhelming. Personally I prefer this more moderate update approach, and so far it's working out great: I can use the new technology today with my .NET 2.0 projects while experimenting with the new features of .NET 3.5 at the same time.

As of the beginning of July I'm still using Beta 1 of .NET 3.5 and Visual Studio 2008. Microsoft has hinted that Beta 2 is on its way before the end of the month, and there's likely to be a go-live license included, so you can start thinking about using .NET 3.5 and getting some code out on the Web for real if you choose. The final release has just been announced for February 27th of 2008, with release to manufacturing expected at the end of this year. Given the relative stability of the features and functionality, it looks like all of this might actually happen on time, too. I'm often critical of the way things are pushed out of the Microsoft marketing machine, but I think this time around Microsoft has struck a good balance and rolled things out at a pace that actually works well. Right on!
 
