MSDN Mag 0710
C# 4.0: New C# Features in the .NET Framework 4, Chris Burrows, page 68
UI Frontiers: The Fluid UI in Silverlight 4, Charles Petzold, page 92
MSDN Magazine (ISSN 1528-4859) is published monthly by 1105 Media, Inc.
As we get going on the latest mini-bounce-back on what looks like an extremely long road to economic recovery, there is some good news: It looks like the tech sector may have a quicker—and higher—bounce than other industries. We're finally getting some news that shows solid, sustained job growth in all areas of IT, including software development.

But as Adrian Monk, my favorite TV detective, would say, "Here's the thing …" Is this great news for you if you're hiring coders but can't pay them a lot yet, which might mean plucking fresh fruit off the college tree? Because I've been reading some worrisome stuff about the quality of education computer science grads are getting, and the heartburn it's causing both the grads and potential employers.

My concern was piqued by an article I saw on the InfoWorld Web site, "The sad standards of computer-related college degrees" (bit.ly/blp267). A concerned father wrote in about his daughter's lack of preparedness for the world of real work. He writes: "Imagine my surprise (and, as it turned out, her relief) that she could get a four-year undergraduate degree in "data processing" without having to write a single program in any language!

"This seems to be a trend," the writer continues. "In an effort to widen and deepen my own skill set, I have had occasion to examine computer science course material available online from a number of top-tier colleges and some from the lower rungs. In most instances, what I remember from my nearly 40-year-old computer science education still places me far ahead of what they are now teaching." And he concludes: "We've had trouble finding qualified U.S. job applicants who want to do the work we need done. I wonder if there's a connection."

The comments from readers accompanying the article support the writer's contention, for the most part. Here's a sampling:

From "rsr," who claims to be a former computer science professor: "Computer Science (and related computer program) enrollments have greatly declined, and schools are trying to reverse the trend. This includes making the programs easier so there will be fewer dropouts and it will be more attractive to students who don't want to work hard but still get a degree."

"Woking," a manager at a "Fortune 500 company," is similarly unimpressed. "I have never interviewed a candidate right out of college who I would hire. No recent graduate that I have interviewed has had sufficient understanding of real-world problems to be useful to me, at least for the salary that the interviewees were expecting."

Woking gives a specific example: "Several years ago I interviewed candidates for an open position as a data modeler. None of the recent college graduates who had even covered Entity Relationship Diagramming in their programs had created a data model with more than five entities." Woking says that they have better success hiring candidates with three to five years work experience, even if the applicant lacks a college degree. That's a pretty damning statement.

"Beney," with 20-plus years experience and no IT degree, puts it succinctly: "Maybe if IT students had to actually write code rather than manipulate IDEs, they'd at least be able to handle the real world when they get out into the job market."

Pretty discouraging stuff. What I'd like to do is use the power of the MSDN network to help determine if we're facing a crisis when it comes to teaching college kids proper software development skills. If you're a computer science professor, recent computer science graduate, hiring manager or anyone else with insight into this issue, let me know your thoughts at [email protected]. If you agree that this is a general failing of the education system, explain how you'd change things: What are the top two or three things you'd do? I'm looking forward to reading your responses. After all, if there's a job to be filled, it makes sense that it be filled with a developer who can actually do that job.
Visit us at msdn.microsoft.com/magazine. Questions, comments or suggestions for MSDN Magazine? Send them to the editor: [email protected].
CUTTING EDGE DINO ESPOSITO
Most of the code written for the Microsoft .NET Framework is based on static typing, even though .NET supports dynamic typing via reflection. Moreover, JScript had a dynamic type system on top of .NET 10 years ago, as did Visual Basic. Static typing means that every expression is of a known type. Types and assignments are validated at compile time and most of the possible typing errors are caught in advance. The well-known exception is when you attempt a cast at run time, which may sometimes result in a dynamic error if the source type is not compatible with the target type.

Static typing is great for performance and for clarity, but it's based on the assumption that you know nearly everything about your code (and data) beforehand. Today, there's a strong need for relaxing this constraint a bit. Going beyond static typing typically means looking at three distinct options: dynamic typing, dynamic objects, and indirect or reflection-based programming.

In .NET programming, reflection has been available since the .NET Framework 1.0 and has been widely employed to fuel special frameworks, like Inversion of Control (IoC) containers. These frameworks work by resolving type dependencies at run time, thus enabling your code to work against an interface without having to know the concrete type behind the object and its actual behavior. Using .NET reflection, you can implement forms of indirect programming where your code talks to an intermediate object that in turn dispatches calls to a fixed interface. You pass the name of the member to invoke as a string, thus granting yourself the flexibility of reading it from some external source. The interface of the target object is fixed and immutable—there's always a well-known interface behind any calls you place through reflection.

Dynamic typing means that your compiled code ignores the static structure of types that can be detected at compile time. In fact, dynamic typing delays any type checks until run time. The interface you code against is still fixed and immutable, but the value you use may return different interfaces at different times.

The .NET Framework 4 introduces some new features that enable you to go beyond static types. I covered the new dynamic keyword in the May 2010 issue (msdn.microsoft.com/magazine/ee336309). In this article, I'll explore the support for dynamically defined types such as expando objects and dynamic objects. With dynamic objects, you can define the interface of the type programmatically instead of reading it from a definition statically stored in some assemblies. Dynamic objects wed the formal cleanliness of static typed objects with the flexibility of dynamic types.

Scenarios for Dynamic Objects
Dynamic objects are not here to replace the good qualities of static types. Static types are, and will remain for the foreseeable future, at the foundation of software development. With static typing, you can find type errors reliably at compile time and produce code that, because of this, is free of runtime checks and runs faster. In addition, the need to pass the compile step leads developers and architects to take care in the design of the software and in the definition of public interfaces for interacting layers.

There are, however, situations in which you have relatively well-structured blocks of data to be consumed programmatically. Ideally, you'd love to have this data exposed through objects. But, instead, whether it reaches you over a network connection or you read it from a disk file, you receive it as a plain stream of data. You have two options to work against this data: using an indirect approach or using an ad hoc type.

In the first case, you employ a generic API that acts as a proxy and arranges queries and updates for you. In the second case, you have a specific type that perfectly models the data you're working with. The question is, who's going to create such an ad hoc type? In some segments of the .NET Framework, you already have good examples of internal modules creating ad hoc types for

Figure 1 The Structure of a Dynamically Created Web Forms Class

Code download available at code.msdn.microsoft.com/mag201007CutEdge.
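To make the distinction concrete, here is a minimal sketch, separate from the column's own listings, of an expando object whose members are attached entirely at run time; the member names FirstName, LastName and FullName are invented for illustration:

```csharp
using System;
using System.Dynamic;

class ExpandoDemo
{
    static void Main()
    {
        // The "interface" of this object is defined programmatically:
        // no static type declares FirstName, LastName or FullName.
        dynamic person = new ExpandoObject();
        person.FirstName = "Dino";
        person.LastName = "Esposito";
        person.FullName = (Func<string>)(() =>
            person.FirstName + " " + person.LastName);

        // All member resolution happens at run time.
        Console.WriteLine(person.FullName());   // prints "Dino Esposito"
    }
}
```

Any misspelled member here would compile fine and fail only at run time, which is exactly the trade-off the surrounding text describes.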
3 Solutions for
Accessing SharePoint
Data in Office 2010
Donovan Follette and Paul Stubbs
Millions of people use the Microsoft Office client applications in support of their daily work to communicate, scrub data, crunch numbers, author documents, deliver presentations and make business decisions. In ever-increasing numbers, many are interacting with Microsoft SharePoint as a portal for collaboration and as a platform for accessing shared data and services.

Some developers in the enterprise have not yet taken advantage of the opportunity to build custom functionality into Office applications—functionality that can provide a seamless, integrated experience for users to directly access SharePoint data from within familiar productivity applications. For enterprises looking at ways to improve end-user productivity, making SharePoint data available directly within Office applications is a significant option to consider.

With the release of SharePoint 2010, there are a number of new ways available to access SharePoint data and present it to the Office user. These range from virtually no-code solutions made possible via SharePoint Workspace 2010 (formerly known as Groove), direct synchronization between SharePoint and Outlook, the new SharePoint REST API and the new client object model. Just as in Microsoft Office SharePoint Server (MOSS) 2007, a broad array of Web services is available in SharePoint 2010 for use as well. In this article, we'll describe a couple of no-code solutions and show you how to build a few more-complex solutions using these new features in SharePoint 2010.

This article discusses:
• Using external data sources
• Building a Word add-in
• Using the client object model
• Web services as social services
Technologies discussed:
Office 2010, SharePoint 2010, Windows Communication Foundation

External Data Sources
Let's start by taking a quick look at the SharePoint list types you can employ as data sources.

One particularly useful data source is an external list that displays data retrieved via a connection to a line-of-business (LOB) system. MOSS 2007 let you connect to LOB data using the Business Data Catalog (BDC), which provided read-only access to back-end systems. SharePoint 2010 provides Business Connectivity Services (BCS), which is an evolution of the BDC that supports full read/write access to your LOB data.

Why would you want to bring LOB data into SharePoint? Consider the use case where you have a customer relationship management (CRM) system that only a limited number of people in the organization can access directly. However, there's a customer table in the database with name and address data that could be used by many others if it were available. In real life, you probably end up with users copying this information from various
non-authoritative sources and pasting it into their Office documents. It would be better to access this customer data from the authoritative CRM system and expose it in SharePoint as an external list that Office clients can access.

SharePoint Designer 2010 is the tool used for configuring access to a LOB system and making its data available in a SharePoint external list. There are a couple of steps required to do this.

The first step is to create a new External Content Type (ECT). The ECT contains metadata describing the structure of the back-end data, such as the fields and CRUD methods that SharePoint will use to interact with it. Once the ECT has been created, an external list can be generated from it on any site within SharePoint. External lists look and act like any other standard list in SharePoint, but the external list data is not stored in SharePoint. Instead, it's retrieved via the ECT when accessed by an end user.

SharePoint Designer includes default support for connecting to external data sources including SQL Server, Windows Communication Foundation (WCF) and the Microsoft .NET Framework. Therefore, an ECT can be easily created for connecting to any SQL Server database table or view, WCF service or Web service. Custom .NET solutions can be built in Visual Studio 2010 using the new SharePoint 2010 Business Data Connectivity Model project template.

For the purposes of this article, the SQL Server data source type was used to create an ECT for a database table. Then the ECT was used to create an external list. Figure 1 shows the resulting "Customers From CRM" ECT after completing the configuration in SharePoint Designer.

Figure 1 ECT Configuration for Accessing External CRM Data

You may want to limit operations to read-only. In that case, you can simply select the Read List and Read Item operations during configuration. These are the only two operations required to create an ECT. Once the ECT is created, it's a simple step to create an external list from it. You can do this by creating a new external list from within SharePoint or SharePoint Designer.

SharePoint Standard Lists
Of course, you can employ standard SharePoint lists to display business data. For example, say your department manages training-course content. You maintain two SharePoint lists: Course Category and Course. These lists contain the course information that employees on other teams use to create customer correspondence, brochures or advertising campaigns. So the data is maintained by a small team, but must be readily available for use by many people
Outlook (see Figure 3). If SharePoint Workspace 2010 is installed on the client computer, Sync to SharePoint Workspace lets you synchronize lists and document libraries to the client with a single click. A local cached copy of the content is then available to the user in SharePoint Workspace whether the user is online or offline. When the user is in an offline state and modifies a list item or document and saves it locally, the list item or document will be synchronized with SharePoint automatically when the user is back online again.

This is a no-code-required solution. Data is made accessible in the SharePoint Workspace client application shown in Figure 4. And because full CRUD methods were defined in the ECT, any changes made to the customer data in SharePoint Workspace will be updated in the CRM database as well.

Because we mapped the CRM database fields to the Contact Office item type during ECT configuration, SharePoint can provide our external list data to Outlook as native Contact Items. By clicking the Connect to Outlook button on the ribbon, SharePoint

Figure 3 Connect & Export Options in the SharePoint Ribbon

You can do more advanced queries by appending the following property filter:
  ?$filter=startswith(propertyname,'value')
But an advanced query that's important here is one that can return the Courses with their associated CourseCategory data. By appending the following to the site URL, you can retrieve the combined structure of Course and CourseCategory in a single payload:
  /_vti_bin/listdata.svc/Course?$expand=Category
You'll see this implemented in a Word add-in in the next section.

users have a rich authoring experience. For this example, we'll build a Word add-in and present this data to the user in a meaningful way. This application will have a dropdown list for the course categories, a listbox that loads with courses corresponding to the category selection and a button to insert text about the course into the Word document. In Visual Studio 2010, create a new Office 2010 Word add-in project in C#.
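The listdata.svc addresses shown above are ordinary strings, so they can be composed programmatically. The following sketch is not from the article's code download; the site URL is hypothetical and ListDataUrl is an invented helper that simply concatenates the service path with optional $expand and $filter options:

```csharp
using System;

static class ListDataUrl
{
    // Builds a listdata.svc query URL for a list, with optional
    // $expand and $filter options following OData conventions.
    public static string For(string siteUrl, string list,
                             string expand = null, string filter = null)
    {
        string url = siteUrl.TrimEnd('/') + "/_vti_bin/listdata.svc/" + list;
        string sep = "?";
        if (expand != null) { url += sep + "$expand=" + expand; sep = "&"; }
        if (filter != null) { url += sep + "$filter=" + filter; }
        return url;
    }
}

class Demo
{
    static void Main()
    {
        // Hypothetical site URL; the Course list comes from the article.
        Console.WriteLine(ListDataUrl.For("http://intranet/training",
            "Course", expand: "Category"));
        // → http://intranet/training/_vti_bin/listdata.svc/Course?$expand=Category
    }
}
```

The resulting URL can then be fetched with any HTTP client, or handed to a WCF Data Services proxy as the article goes on to do.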
    base.OnInitialized(e);
  }
Now you need to add a couple of events. Return to the CoursePicker designer and double-click the button to create the button click event. Next, click on the ComboBox and in the properties menu, click the Events tab and double-click the SelectionChanged

Figure 5 ATOM Services Document
  CourseItem course =
    (CourseItem)courseListBox.SelectedItem;
  Globals.ThisAddIn.Application.Selection.InsertAfter(
    String.Format("{0}: {1} \n{2}\n", course.CourseID,
    course.Name, course.Description));
}
And that's it—really simple code for accessing the data in SharePoint via WCF Data Services.

Now open the ThisAddIn.cs file. This is the main entry point for all add-ins for Office. Here you add the code to instantiate the task pane:
  private void ThisAddIn_Startup(
    object sender, System.EventArgs e) {
    UserControl wpfHost = new UserControl();
    ElementHost host = new ElementHost();
    host.Dock = DockStyle.Fill;
    host.Child = new CoursePicker();
    wpfHost.Controls.Add(host);
    CustomTaskPanes.Add(
      wpfHost, "Training Courses").Visible = true;
  }
so the task pane for this add-in will just be one in the collection. Figure 7 shows the completed add-in.

With the data appearing in the Office application, you can take the solution further by adding code that interacts with the Word APIs. For example, you can add code so that when a user selects a course, the information is inserted and formatted in the document. The Office application APIs are rich and allow you to add more features to your custom solution that can make users even more productive. Next, we'll see an example of this with Word content controls connected to a client-side SharePoint object model.

Using the Client Object Model
Using the REST APIs to gain access to the data is one among a few options available to you. For example, there are also three new APIs available for SharePoint 2010 that provide a consistent programming model across JavaScript, .NET managed applications and Silverlight clients. These three client object models interact with
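To give a feel for the managed client object model mentioned above, here is a minimal sketch. It is not the article's own listing; the site URL and list title are placeholders, and running it requires references to Microsoft.SharePoint.Client.dll and Microsoft.SharePoint.Client.Runtime.dll from SharePoint 2010 plus a reachable server:

```csharp
using System;
using Microsoft.SharePoint.Client;

class ClientOmSketch
{
    static void Main()
    {
        // Placeholder site URL; point this at a real SharePoint 2010 site.
        using (ClientContext ctx = new ClientContext("http://intranet/training"))
        {
            List courses = ctx.Web.Lists.GetByTitle("Course");
            ListItemCollection items =
                courses.GetItems(CamlQuery.CreateAllItemsQuery());

            // Nothing goes over the wire until ExecuteQuery is called;
            // Load tells the context which objects to retrieve in the batch.
            ctx.Load(items);
            ctx.ExecuteQuery();

            foreach (ListItem item in items)
                Console.WriteLine(item["Title"]);
        }
    }
}
```

The batching pattern (Load, then a single ExecuteQuery round trip) is the defining design choice of all three client object models.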
Microsoft SharePoint search uses an account that usually has full read access across the repository to index its contents. So it's important that when a user queries for some content, he should be restricted to view only the documents he has permission to see. SharePoint uses the access control list (ACL) associated with each document to trim out query results that users have no permission to view, but the default trimming provided by SharePoint (out-of-box trimming) may not always be adequate to meet data security needs. In that case, you may want to further trim the results depending on an organization's authentication structure.

This is where the SharePoint custom security trimming infrastructure is useful. SharePoint lets you implement business logic in a separate module and then integrate it into the workflow of the query processor that serves the queries. In the security trimming path, custom query trimming follows out-of-box security trimming. So the number of query results after custom trimming must be equal to or less than the number of documents recalled before registering the custom security trimmer (CST) assembly. Before delving into the CST architecture, we'll provide a quick view of SharePoint search and the new claims authentication infrastructure.

This article discusses:
• Claims authentication in SharePoint 2010
• Deploying a custom security trimmer
• Using PowerShell cmdlets
• Troubleshooting
Technologies discussed:
SharePoint, custom security trimmer
Code download available at:
code.msdn.microsoft.com/mag201007Search

SharePoint Search Overview
At a high level, the search system can be divided into two discrete parts: the gatherer pipeline and the query processor pipeline.

Gatherer Pipeline This part is responsible for crawling and indexing content from various repositories, such as SharePoint sites, HTTP sites, file shares, Lotus Notes, Exchange Server and so on. This component lives inside MSSearch.exe. When a request is issued to crawl a repository, the gatherer invokes a filter daemon, MssDmn.exe, to load the required protocol handlers and filters necessary to connect, fetch and parse the content. Figure 1 represents a simplified view of the gatherer pipeline.

SharePoint can only crawl using a Windows NTLM authentication account. Your content source must authorize the Windows account sent as part of the crawl request in order to access the document content. Though claims authentication is supported in SharePoint 2010, the gatherer is still not a claims-aware application and will not access a content source that has claims authentication only.
Query Processor Pipeline In SharePoint 2010, two of the most important changes in the query processor pipeline are in its topological scalability and authentication model. In Microsoft Office SharePoint Server (MOSS) 2007, the query processor (search query and site settings service, referred to as search query service from here on) runs in the same process as the Web front end (WFE), but in SharePoint 2010 it can run anywhere in the farm—and it also runs as a Web service. The WFE talks to the search query service through Windows Communication Foundation (WCF) calls. The search query service is now completely built on top of the SharePoint claims authentication infrastructure. This decouples SharePoint search from its tight integration with Windows authentication and forms authentication. As a result, SharePoint now supports various authentication models.

Figure 1 A Simplified View of the SharePoint Gatherer Pipeline
The search query service trims the search results according to the rights of the user who issues the query. Custom security trimmers are called by the search query service after out-of-box trimming has completed. See Figure 2 for the various components involved when a query is performed. Custom security trimming is part of the query pipeline, so we'll limit this discussion to components of the query pipeline.

Claims Authentication in SharePoint 2010
A basic understanding of claims authentication support in SharePoint 2010 is required to implement custom trimming logic inside a CST assembly. In the claims authenticated environment, the user identity is maintained inside an envelope called a security token. It contains a collection of identity assertions or claims about the user. Examples of claims are username, e-mail address, phone number, role and so on. Each claim will have various attributes such as type and value. For example, in a claim the UserLogonName may be the type and the name of the user who is currently logged in may be the value.

Security tokens are issued by an entity called a security token service (STS). This is a Web service that responds to user authentication requests. Once the user is authenticated, STS sends back a security token with all the user rights. STS can be configured either to live inside the same SharePoint farm or act as a relying party to another STS that lives outside the farm: Identity Provider-STS (IP-STS) and Relying Party-STS (RP-STS), respectively. Whether you want to use IP-STS or RP-STS has to be carefully considered while designing a SharePoint deployment.

SharePoint uses the default claims provider shipped with the product in a simple installation. Even if you set up the farm completely using Windows authentication, when a query is issued, a search service application proxy will talk to STS to extract all the claims of the user in a security token. This token is then passed to the search query service through a WCF call.

Workflow of Custom Security Trimming
The workflow logic of a CST can be represented in a simple flowchart as shown in Figure 3. As stated earlier, the search query service first performs out-of-box security trimming and then looks for the presence of any CSTs associated with the search results. The association of a particular content source with a CST is done by defining a crawl rule for that specific content source. If the search query service finds any CST associated with any of the URLs in the search results, it calls into that trimmer. Trimmers are loaded into the same IIS worker process, w3wp.exe, in which the search query service is running.

Once the trimmer is loaded, the search query service calls into the CheckAccess method implemented inside the trimmer with an out-of-box trimming result set associated with the crawl rule that you defined earlier. The CheckAccess method decides whether a specific URL should be included in the final result set sent back to the user. This is done by returning a bit array. Setting a bit inside this array to either true or false will "include" or "block" the URL from the final result set. In case you want to stop processing the URLs due to performance or some unexpected reason, you must throw a PluggableAccessCheckException. If you throw after processing a partial list of URLs, the processed results are sent back to the user. The search query service will remove all the unprocessed URLs from the final result set.

Steps Involved in Deploying a Custom Security Trimmer
In a nutshell, there are five steps involved in the successful deployment of a CST:
1. Implement the ISecurityTrimmer2 interface.
   a. Implement Initialize and CheckAccess methods using managed code
   b. Create an assembly signing file and include it as part of the project
   c. Build the assembly
2. Deploy the trimmer into the Global Assembly Cache (GAC) of all the machines where a search query service is running.
3. Create a crawl rule for the content sources that you want to custom trim. You can do this from the Search Administration site.
4. Register the trimmer with the crawl rule using the Windows PowerShell cmdlet New-SPEnterpriseSearchSecurityTrimmer.
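The include/block contract described above can be sketched in isolation. This is not the real ISecurityTrimmer2 implementation, which lives in the SharePoint server assemblies; it is a stand-in that mimics only the bit-array semantics, with a made-up access rule that grants a URL when the user's name appears in its path:

```csharp
using System;
using System.Collections;

class TrimmerSketch
{
    // Mimics CheckAccess: one bit per candidate URL; true = include
    // the URL in the final result set, false = block it.
    public static BitArray CheckAccess(string[] urls, string userName)
    {
        BitArray access = new BitArray(urls.Length);
        for (int i = 0; i < urls.Length; i++)
            access[i] = urls[i].Contains(userName);  // made-up access rule
        return access;
    }

    static void Main()
    {
        string[] urls = {
            "http://crm/docs/alice/a.docx",
            "http://crm/docs/bob/b.docx"
        };
        BitArray bits = CheckAccess(urls, "alice");
        for (int i = 0; i < urls.Length; i++)
            Console.WriteLine("{0} -> {1}", urls[i],
                bits[i] ? "include" : "block");
    }
}
```

A real trimmer would instead evaluate the caller's claims against each URL's back-end permissions, and could throw PluggableAccessCheckException to stop processing early, as described above.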
Figure 2 Workflow of a Query Originating from the Search Center in a SharePoint Site
5. Perform a full crawl of the content sources associated with the crawl rules that you created in step 3. A full crawl is required to properly update all of the related database tables. An incremental crawl will not update the appropriate tables.

Implementing the Custom Security Trimmer Interface
MOSS 2007 and Microsoft Search Server (MSS) 2008 supported custom security trimming of search results through the interface ISecurityTrimmer. This interface has two methods, Initialize and CheckAccess. Because of the architectural changes in SharePoint and the search system in the 2010 versions, both of these methods won't work as they did in MOSS 2007. They need to be re-implemented using the ISecurityTrimmer2 interface. As a result, if you try to register a MOSS 2007 trimmer in SharePoint 2010, it will fail, saying ISecurityTrimmer2 is not implemented. Other changes from MOSS 2007 include:

Changes in the Initialize Method In MOSS 2007, one of the parameters passed was the SearchContext object. SearchContext was the entry point into the search system and it provided the search context for the site or search service provider (SSP). This class has been deprecated in 2010. Instead, use the SearchServiceApplication class:

void Initialize(NameValueCollection staticProperties,
  SearchServiceApplication searchApplication);

Changes in the CheckAccess Method In both MOSS 2007 and SharePoint 2010, the search query service calls into the CST assemblies. In MOSS 2007, the CheckAccess method took only two parameters, but in SharePoint 2010, the search query service passes the user identity into CheckAccess using a third parameter of type IIdentity:

public BitArray CheckAccess(IList<String> documentCrawlUrls,
  IDictionary<String, Object> sessionProperties, IIdentity passedUserIdentity)

ISecurityTrimmer2::Initialize Method This method is called the first time a trimmer is loaded into the search query service IIS worker process. The assembly will live for the duration of the worker process. Here's the signature of this method and a description of how it works:

void Initialize(NameValueCollection staticProperties,
  SearchServiceApplication searchApplication);

staticProperties–The trimmer registration Windows PowerShell cmdlet, New-SPEnterpriseSearchSecurityTrimmer, takes a parameter called "properties" (in MOSS 2007 this was called "configprops") through which you can pass name/value pairs separated by ~. This may be useful to initialize your trimmer class properties.

For example, when passing "superadmin~foouser~poweruser~baruser" to the New-SPEnterpriseSearchSecurityTrimmer cmdlet, the NameValueCollection parameter will have two items in the collection, with keys "superadmin" and "poweruser" and values "foouser" and "baruser," respectively.

searchApplication–If your trimmer requires deeper knowledge about the search service instance and the SharePoint farm, use the searchApplication object to determine that information. To learn more about the SearchServiceApplication class, refer to msdn.microsoft.com/library/ee573121(v=office.14).

ISecurityTrimmer2::CheckAccess Method This implements all the trimming logic. Pay special attention to two aspects of this method: the identity of the user who issued the query, and the performance latency caused by a large returned query set. Following is the signature of this method and a description of how it works:

public BitArray CheckAccess(IList<String> documentCrawlUrls,
  IDictionary<String, Object> sessionProperties, IIdentity passedUserIdentity)

documentCrawlUrls–The collection of URLs to be security trimmed by this trimmer.

sessionProperties–A single query instance is treated as one session. If your query fetches many results, the CheckAccess method is called multiple times. You can use this parameter to share values or to keep track of the URLs processed between these calls.

passedUserIdentity–This is the identity of the user who issued the query. It's the identity by which the code will allow or deny access to content.
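Putting the two methods together, a minimal trimmer could look like the following sketch. It assumes the ISecurityTrimmer2 and SearchServiceApplication types from the Microsoft.Office.Server.Search assembly, and the access rule itself (only the user named in the registration properties sees results) is invented purely for illustration; real trimmers would typically call out to an external authorization store:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Collections.Specialized;
using System.Security.Principal;
using Microsoft.Office.Server.Search.Administration;
using Microsoft.Office.Server.Search.Query;

// Hypothetical trimmer: shows results only to the user named in the
// "alloweduser" property passed at registration time, for example
// -Properties "alloweduser~CONTOSO\jdoe".
public class SampleSecurityTrimmer : ISecurityTrimmer2
{
    private string allowedUser;

    public void Initialize(NameValueCollection staticProperties,
        SearchServiceApplication searchApplication)
    {
        // Runs once when the trimmer is loaded into the query worker process.
        allowedUser = staticProperties["alloweduser"];
    }

    public BitArray CheckAccess(IList<string> documentCrawlUrls,
        IDictionary<string, object> sessionProperties,
        IIdentity passedUserIdentity)
    {
        // One bit per crawl URL: true = keep the result, false = trim it.
        var verdicts = new BitArray(documentCrawlUrls.Count);
        bool allowed = string.Equals(passedUserIdentity.Name, allowedUser,
            StringComparison.OrdinalIgnoreCase);
        for (int i = 0; i < documentCrawlUrls.Count; i++)
            verdicts[i] = allowed;
        return verdicts;
    }
}
```

Because CheckAccess can be called many times for a large result set, any per-document lookups inside the loop should be batched or cached to keep query latency acceptable.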
Microsoft Office OneNote is a powerful digital notebook for collecting, organizing, searching and sharing information. With the recent release of Microsoft Office 2010, not only is the OneNote user experience improved, but OneNote notebooks are now more universally available. Users can synchronize content among computers via Windows Live; search, edit and share notes from any Web browser; and access full notebooks from Windows Mobile (and, soon, Windows Phone 7). Further, OneNote was previously included only in some Office editions, but it's now in every edition of Office 2010. All of these factors create a more compelling opportunity than ever before to integrate OneNote into information management solutions.

In this article, I'll provide an overview of developing applications that interoperate with data from Microsoft OneNote 2010 and 2007. In the process, I'll introduce the OneNote Object Model project that is freely available on CodePlex and demonstrate how this library makes it easy to integrate information from OneNote notebooks, sections and pages into client applications.

Note: The OneNote Object Model library on CodePlex, to which this article refers, had not been updated for compatibility with OneNote 2010 at the time of this writing.

This article discusses:
• The evolution of OneNote development
• Accessing OneNote data using the COM API
• Retrieving and updating page content using the COM API
• The OneNote Object Model Library
• Data binding with the OneNote Object Model Library

Technologies discussed:
OneNote 2010, OneNote 2007, Visual Studio 2010, LINQ, OneNote Object Model, XAML Data Binding, Windows Presentation Foundation, C#

Code download available at:
code.msdn.microsoft.com/mag201007OneNote

The Evolution of OneNote Development
The initial release of OneNote 2003 didn't provide an API to external applications. Shortly thereafter, however, OneNote 2003 SP1 added a COM library, called the OneNote 1.1 Type Library, which enabled programmatic import of images, ink and HTML into OneNote via a simple class called CSimpleImporter. Notably, however, this class only provided data import capabilities; you could use it to push data into OneNote notebooks, but there was no way to get content back out programmatically.

The release of OneNote 2007 brought much more powerful development capabilities with a new COM API that provides the ability to import, export and modify OneNote 2007 content programmatically. The OneNote Application class in that library provides a rich collection of methods for working with:
• Notebook structure: discovering, opening, modifying, closing and deleting notebooks, section groups and sections
• Page content: discovering, opening, modifying, saving and deleting page content
• Navigation: finding, linking to and navigating to pages and objects

Most of these methods return or accept XML documents that represent both notebook structure and page content. Saul Candib wrote a two-part series, "What's New for Developers in OneNote 2007," that documents this API at msdn.microsoft.com/library/ms788684(v=office.12), and the XML schema is detailed at msdn.microsoft.com/library/aa286798(office.12).

The XML schema for OneNote 2010 is substantially similar to that in OneNote 2007. OneNote 2010 introduces a file format change to support some of its new features (such as linked note-taking, versioning, Web sharing, multilevel subpages and equation support). However, OneNote 2010 can continue to work on OneNote 2007 notebooks without changing the file format. In OneNote 2010, retrieving data from sections stored in the OneNote 2007 file format will yield XML documents similar to those in OneNote 2007. The primary differences in the XML schema for OneNote 2010 sections are additive changes to support the new features listed earlier. A new XMLSchema enumeration is available to represent the OneNote schema version; many of the OneNote methods have new overloads that take an XMLSchema parameter to indicate the schema version desired.

Note that the CSimpleImporter class, introduced in OneNote 2003 and still available in OneNote 2007, has been removed from OneNote 2010, so applications that use this class need to be rewritten to use the new interfaces in order to work with OneNote 2010.

Accessing OneNote Data Using the COM API
It's fairly straightforward to start using the OneNote COM API to access live data from OneNote notebooks. Start by creating a new console application in Visual Studio and then add a reference to the Microsoft OneNote 14.0 Type Library COM component (for OneNote 2010) or the Microsoft OneNote 12.0 Type Library COM component (for OneNote 2007).

If you're using Visual Studio 2010 to develop OneNote 2010 applications, take note of a couple of minor compatibility issues. First, due to a mismatch of the OneNote interop assembly that shipped with Visual Studio 2010, you should not directly reference the Microsoft.Office.Interop.OneNote component on the .NET tab of the Add Reference dialog, but instead reference the Microsoft OneNote 14.0 Type Library component on the COM tab. This still results in the addition of a OneNote interop assembly to your project's references.

Second, the OneNote 14.0 Type Library is not compatible with the Visual Studio 2010 "NOPIA" feature (in which primary interop assemblies are not embedded in the application by default). Therefore, make sure to set the Embed Interop Types property to False for the OneNote interop assembly reference. (Both of these issues are described in more detail on OneNote Program Manager Daniel Escapa's blog at blogs.msdn.com/descapa/archive/2010/04/27/onenote-2010-and-visual-studio-2010-compatibility-issues.aspx.)

With the OneNote library reference in place, you're ready to make calls to the OneNote API. The code in Figure 1 uses the GetHierarchy method to retrieve an XML document containing a list of OneNote notebooks, then uses LINQ to XML to extract and print the notebook names to the console.

Figure 1 Enumerating Notebooks

using System;
using System.Linq;
using System.Xml.Linq;
using Microsoft.Office.Interop.OneNote;

class Program
{
    static void Main(string[] args)
    {
        var onenoteApp = new Application();

        string notebookXml;
        onenoteApp.GetHierarchy(null, HierarchyScope.hsNotebooks, out notebookXml);

        var doc = XDocument.Parse(notebookXml);
        var ns = doc.Root.Name.Namespace;
        foreach (var notebookNode in
            from node in doc.Descendants(ns + "Notebook") select node)
        {
            Console.WriteLine(notebookNode.Attribute("name").Value);
        }
    }
}

The HierarchyScope enumeration, passed as the second parameter to the GetHierarchy method, specifies the depth of the notebook structure to retrieve. To retrieve sections in addition to the notebooks, simply update this enumeration value to HierarchyScope.hsSections and process the additional XML child nodes, as demonstrated in Figure 2.

Figure 2 Enumerating Sections

using System;
using System.Linq;
using System.Xml.Linq;
using Microsoft.Office.Interop.OneNote;

class Program
{
    static void Main(string[] args)
    {
        var onenoteApp = new Application();

        string notebookXml;
        onenoteApp.GetHierarchy(null, HierarchyScope.hsSections, out notebookXml);

        var doc = XDocument.Parse(notebookXml);
        var ns = doc.Root.Name.Namespace;
        foreach (var notebookNode in from node in doc.Descendants(ns +
            "Notebook") select node)
        {
            Console.WriteLine(notebookNode.Attribute("name").Value);
            foreach (var sectionNode in from node in
                notebookNode.Descendants(ns + "Section") select node)
            {
                Console.WriteLine("  " + sectionNode.Attribute("name").Value);
            }
        }
    }
}

Retrieving and Updating Page Content
The GetPageContent method will return an XML document containing all of the content on a specified page. The page to retrieve is specified using a OneNote object ID, a string-based
unique identifier for each object in the OneNote notebook hierarchy. This object ID is included as an attribute on the XML nodes returned by the GetHierarchy method.

Figure 3 builds on the previous examples by using the GetHierarchy method to retrieve the OneNote notebook hierarchy down to page scope. It then uses LINQ to XML to select the node for the page named "Test page" and pass that page's object ID to the GetPageContent method. The XML document representing the page content is then printed to the console.

Figure 3 Getting Page Content

using System;
using System.Linq;
using System.Xml.Linq;
using Microsoft.Office.Interop.OneNote;

class Program
{
    static void Main(string[] args)
    {
        var onenoteApp = new Application();

        string notebookXml;
        onenoteApp.GetHierarchy(null, HierarchyScope.hsPages, out notebookXml);

        var doc = XDocument.Parse(notebookXml);
        var ns = doc.Root.Name.Namespace;
        var pageNode = doc.Descendants(ns + "Page").Where(n =>
            n.Attribute("name").Value == "Test page").FirstOrDefault();
        if (pageNode != null)
        {
            string pageXml;
            onenoteApp.GetPageContent(pageNode.Attribute("ID").Value, out pageXml);
            Console.WriteLine(XDocument.Parse(pageXml));
        }
    }
}

The UpdatePageContent method can be used to make changes to a page. The page content is specified by the same XML document schema that the code in Figure 3 retrieved; it can contain various content elements that define text outlines, inserted files, images, ink, and audio or video files.

The UpdatePageContent method treats the elements in the provided XML document as a collection of content that may have changed, matching specified content to existing content via its OneNote object ID. You can therefore make changes to existing content by calling the GetPageContent method, making the desired changes to the XML returned, then passing that XML back to the UpdatePageContent method. You can also specify new content elements to be added to the page.

To illustrate this, Figure 4 adds a date stamp to the bottom of our test page. It uses the approach shown in Figure 3 to determine the OneNote object ID of the page, and then uses the XDocument and XElement classes in System.Xml.Linq to construct an XML document containing the new content. Because the Page object ID specified in the document matches the object ID of an existing page, the UpdatePageContent method will append the new content to the existing page.

Figure 4 Updating Page Content

using System;
using System.Linq;
using System.Xml.Linq;
using Microsoft.Office.Interop.OneNote;

class Program
{
    static void Main(string[] args)
    {
        var onenoteApp = new Application();

        string notebookXml;
        onenoteApp.GetHierarchy(null, HierarchyScope.hsPages, out notebookXml);

        var doc = XDocument.Parse(notebookXml);
        var ns = doc.Root.Name.Namespace;
        var pageNode = doc.Descendants(ns + "Page").Where(n =>
            n.Attribute("name").Value == "Test page").FirstOrDefault();
        if (pageNode != null)
        {
            // Read the ID only after confirming the page node exists.
            var existingPageId = pageNode.Attribute("ID").Value;
            var page = new XDocument(new XElement(ns + "Page",
                new XElement(ns + "Outline",
                    new XElement(ns + "OEChildren",
                        new XElement(ns + "OE",
                            new XElement(ns + "T",
                                new XCData("Current date: " +
                                    DateTime.Now.ToLongDateString())))))));
            page.Root.SetAttributeValue("ID", existingPageId);
            onenoteApp.UpdatePageContent(page.ToString(), DateTime.MinValue);
        }
    }
}

The OneNote Object Model Library
It isn't particularly difficult to interact with OneNote data in this way, but it's a bit awkward to parse and construct XML documents just to perform basic data operations. That's where the OneNote Object Model comes in. It's a managed code library that provides object-oriented abstractions over the COM-based OneNote API. The library is open source and licensed under the Microsoft Public License (Ms-PL).

The OneNote Object Model is available for download on CodePlex at onom.codeplex.com. The library was designed for OneNote 2007, and by the time you read this, the release downloads should be updated to provide compatibility with OneNote 2010. If not, you can still use it with OneNote 2007 sections in OneNote 2010 by downloading the source code, removing the existing Microsoft.Office.Interop.OneNote assembly reference in the OneNoteCore project and adding a reference to the Microsoft OneNote 14.0 Type Library as shown previously.

In addition to some unit test projects and sample code, the solution contains two class library projects: OneNoteCore and OneNoteFramework. The OneNoteCore library is the low-level bridge between the OneNote COM API and familiar Microsoft .NET Framework metaphors; it exposes real return values instead of COM out parameters, converts COM error codes into .NET exceptions, exposes a OneNoteObjectId struct and XDocument instances instead of raw strings, and more. Studying this code can help you understand how the OneNote API works, but in most cases you won't need to interact with the OneNoteCore library directly.

The OneNoteFramework library provides higher-level abstractions of OneNote concepts. Here you'll find classes with intuitive names like OneNoteNotebook, OneNoteSection and OneNotePage. The primary entry point for interacting with the OneNote hierarchy structure is a class called OneNoteHierarchy, which contains a static member called Current. By adding an
Simplified Data Access
As you might expect, the OneNoteNotebook class has a property called Sections. Therefore, you can enumerate the section names (as in Figure 2) simply as follows:

using Microsoft.Office.OneNote;

class Program
{
    static void Main(string[] args)
    {
        foreach (var notebook in OneNoteHierarchy.Current.Notebooks)
        {
            System.Console.WriteLine(notebook.Name);
            foreach (var section in notebook.Sections)
            {
                System.Console.WriteLine("  " + section.Name);
            }
        }
    }
}

Wrapping up, the OneNote Object Model library substantially simplifies access to data in Microsoft OneNote notebooks, exposing rich object collections that can be queried and manipulated with LINQ expressions and WPF data binding. A follow-up article will extend these concepts to explore working with OneNote notebooks in Silverlight and Windows Phone applications, and accessing OneNote data in the cloud.

ANDY GRAY is a partner and technology director of Five Talent Software, helping nonprofit organizations operate more effectively through strategic technology solutions. He writes about OneNote development at onenotedev.com.

THANKS to the following technical experts for reviewing this article: Michael Gerfen and John Guin
OFFICE SERVICES

Merging Word Documents on the Server Side with SharePoint 2010

Ankush Bhatia and Manvir Singh
Business application developers must often create solutions that automate day-to-day activities for their organizations. These activities typically involve processing and manipulating data in various documents—for example, extracting and consolidating data from multiple source documents, merging data into e-mail messages, searching and replacing content in documents, recalculating data in workbooks, extracting images from presentations ... and the list goes on and on.

This article discusses:
• The status report template
• Creating a SharePoint document library
• Building the Web Part
• Merging the reports

Technologies discussed:
Office 2010, SharePoint 2010

Code download available at:
code.msdn.microsoft.com/mag201007DocMerge

Microsoft Office makes these kinds of repetitive tasks simpler by providing a rich API that developers can use to automate them. Because such solutions work seamlessly for normal desktop users, developers have taken them to the next level: deploying the solutions to servers that provide a central point where all of this repetitive work can be addressed for multiple users without any human intervention. Although moving solutions that complete repetitive Office tasks from the desktop to a server seems straightforward, it's not quite as simple as it sounds.

Microsoft designed the Office application suite for desktop computer scenarios where a user is logged on to a machine and is sitting in front of it. For reasons of security, performance and reliability, Office applications are not the right tools for server-side scenarios. Office applications in a server environment may require manual intervention, and that's not optimal for a server-side solution. Microsoft recommends avoiding this kind of solution, as explained in the Microsoft Support article, "Considerations for server-side Automation of Office" (support.microsoft.com/kb/257757).

Since the release of Office 2007, however, the Office automation story has changed a great deal. With Office 2007, Microsoft introduced Office OpenXML and Excel Services for developers who would like to develop Office-based solutions on the server.

With Office 2010 and SharePoint 2010, Microsoft has come up with a new set of components called Application Services. These put a rich set of tools in a developer's bag for Office automation solutions. Application Services include Excel Services, Word Automation Services, InfoPath Forms Services, PerformancePoint Services and Visio Services. You can learn more about the details of these services at msdn.microsoft.com/library/ee559367(v=office.14).
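As a taste of what Word Automation Services looks like in code, the following sketch queues a server-side document conversion job. The site URL and file names are hypothetical; "Word Automation Services" is the default name of the service application, and the job itself runs asynchronously on a SharePoint timer job rather than completing when Start returns:

```csharp
using Microsoft.Office.Word.Server.Conversions;
using Microsoft.SharePoint;

class ConversionExample
{
    static void QueueConversion()
    {
        // Hypothetical site; the file URLs below must live in this site.
        using (SPSite site = new SPSite("http://server/sites/reports"))
        {
            ConversionJob job = new ConversionJob("Word Automation Services");
            job.UserToken = site.UserToken;           // Run as this user.
            job.Settings.OutputFormat = SaveFormat.PDF;
            job.AddFile("http://server/sites/reports/WSR.docx",
                        "http://server/sites/reports/WSR.pdf");
            job.Start();  // Queued; a timer job performs the conversion.
        }
    }
}
```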
[Diagram: Project managers upload weekly status reports for all projects to a SharePoint document library; the Group Manager requests a consolidated status report built from the individual status reports SharePoint stores.]
In this article, we will show you how to use Office OpenXML, Word Automation Services and SharePoint to build a simple application that merges separate status reports into a single document.

Building a Template
To implement this solution, the first step is to provide a common template to all the project managers for filling out the weekly status reports. When they finish filling in the data, they'll upload the reports to a SharePoint repository. On Monday morning, the Group Manager can then log into the SharePoint site and fire up the logic that performs the following tasks:
1. Reads all of the individual status report documents.
2. Merges them into a single report.
3. Saves the report in the repository for users to access.

[Figure 2 Weekly Status Report Template]

click Site content types. Click Create and then type a name for the content type (we used Weekly Status Report). In the Select Parent Content Type From list, select Document Content Types. In the Parent Content type list, select Document and click OK.

Under Settings, select Advanced Settings, then choose the "Upload a new document template" radio button and click Browse. Find the report template (WeeklyStatusReport.dotx) and upload it to the library.

Also add an event handler for the button click:

void OnSubmitClick(object sender, EventArgs e) {
  // TODO : Put code to merge documents here
}

At this point you can build and deploy your project. We will add the implementation to our OnSubmitClick handler a bit later in this article.

The next step is to add the Web Part to the document library. In SharePoint Designer 2010, open the SharePoint site. Click All Files | WSR Library | Forms, then click on AllItems.aspx to edit it.
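The heart of the OnSubmitClick handler will be merging the uploaded reports into one document. As a rough sketch of one way to approach that with the OpenXML SDK (not necessarily the exact implementation in the article's code download), each source document can be appended to a target document as an altChunk part, which Word stitches into a single document when the merged file is opened:

```csharp
using System.IO;
using DocumentFormat.OpenXml.Packaging;
using DocumentFormat.OpenXml.Wordprocessing;

static class ReportMerger
{
    // Appends each source .docx to the target document as an
    // alternative-format import part referenced by an AltChunk element.
    public static void Merge(string targetPath, params string[] sourcePaths)
    {
        using (var target = WordprocessingDocument.Open(targetPath, true))
        {
            MainDocumentPart mainPart = target.MainDocumentPart;
            int chunkIndex = 0;
            foreach (string source in sourcePaths)
            {
                string chunkId = "mergedChunk" + chunkIndex++;
                var chunk = mainPart.AddAlternativeFormatImportPart(
                    AlternativeFormatImportPartType.WordprocessingML, chunkId);
                using (var stream = File.OpenRead(source))
                    chunk.FeedData(stream);
                mainPart.Document.Body.Append(new AltChunk { Id = chunkId });
            }
            mainPart.Document.Save();
        }
    }
}
```

In a SharePoint Web Part the source and target would be streams from the document library rather than file paths, but the altChunk mechanics are the same.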
name of the Web Part you just created and deployed), and click OK (see Figure 4). Figure 5 shows the page with the Web Part in place. Save the page and close SharePoint Designer.

The next step is to implement the logic for saving the merged document to the library as a new document. This requires a bit of effort, but you can make it easier by using the SharePoint Managed Client Object Model. You'll need to add two references to the
Building Distributed Apps with NHibernate and Rhino Service Bus

Oren Eini
For a long time, I dealt almost exclusively in Web applications. When I moved over to build a smart client application, at first I was at quite a loss as to how to approach building such an application. How do I handle data access? How do I communicate between the smart client application and the server?

Furthermore, I already had a deep investment in an existing toolset that drastically reduced the time and cost for development, and I really wanted to be able to continue using those tools. It took me a while to figure out the details to my satisfaction, and during that time, I kept thinking how much simpler a Web app would be—if only because I knew how to handle such apps already.

There are advantages and disadvantages to smart client applications. On the plus side, smart clients are responsive and promote interactivity with the user. You also reduce server load by moving processing to a client machine, and enable users to work even while disconnected from back-end systems.

On the other hand, there are the challenges inherent in such smart clients, including contending with the speed, security and bandwidth limitations of data access over the intranet or Internet. You're also responsible for synchronizing data between front-end and back-end systems, distributed change-tracking, and handling the issues of working in an occasionally connected environment.

A smart client application, as discussed in this article, can be built with either Windows Presentation Foundation (WPF) or Silverlight. Because Silverlight exposes a subset of WPF features, the techniques and approaches I outline here are applicable to both.

In this article, I start the process of planning and building a smart client application using NHibernate for data access and Rhino Service Bus for reliable communication with the server. The application will function as the front end for an online lending library, which I called Alexandria. The application itself is split into two major pieces. First, there's an application server running a set of services (where most of the business logic will reside), accessing the database using NHibernate. Second, the smart client UI will make exposing those services to the user easy.

NHibernate (nhforge.org) is an object-relational mapping (O/RM) framework designed to make it as easy to work with relational databases as it is to work with in-memory data. Rhino Service Bus (github.com/rhino-esb/rhino-esb) is an open source service bus implementation built on the Microsoft .NET Framework, focusing primarily on ease of development, deployment and use.

This article discusses:
• Distribution of responsibilities
• Fallacies of distributed computing
• Queues and disconnected operation
• Session and transaction management

Technologies discussed:
NHibernate, Rhino Service Bus

Distribution of Responsibilities
The first task in building the lending library is to decide on the proper distribution of responsibility between the front-end and back-end systems. One path is to focus the application primarily on the UI so that most of the processing is done on the client machine. In this case the back end serves mostly as a data repository. In essence, this is just a repetition of the traditional client/server application, with the back end serving as a mere proxy for the data store. This is a valid design choice if the back-end system is just a data repository. A personal book catalog, for example, might benefit from such architecture, because the behavior of the application is limited to managing data for the users, with no manipulation of the data on the server side.

[Figure 1 The Application's Architecture: the smart client communicates with an application server built on NHibernate and Rhino Service Bus, which in turn talks to the database.]

For such applications, I recommend making use of WCF RIA Services or WCF Data Services. If you want the back-end server to expose a CRUD interface for the outside world, then leveraging WCF RIA Services or WCF Data Services allows you to drastically cut down the time required to build the application. But while both technologies let you add your own business logic to the CRUD interface, any attempt to implement significant application behavior using this approach would likely result in an unmaintainable, brittle mess. I won't cover building such an application in this article, but Brad Adams has shown a step-by-step approach for building just such an application using NHibernate and WCF RIA Services on his blog at blogs.msdn.com/brada/archive/2009/08/06/business-apps-example-for-silverlight-3-rtm-and-net-ria-services-july-update-part-nhibernate.aspx.

Going all the way to the other extreme, you can choose to implement most of the application behavior on the back end, leaving the front end with purely presentation concerns. While this seems reasonable at first, because this is how you typically write Web-based applications, it means that you can't take advantage of running a real application on the client side. State management would be harder. Essentially you're back to writing a Web application, with all the complexities this entails. You won't be able to shift processing to the client machine and you won't be able to handle interruptions in connectivity.

Worse, from the user perspective, this approach means that you present a more sluggish UI, since all actions require a roundtrip to the server.

I'm sure it won't surprise you that the approach I'm taking in this example is somewhere in the middle. I'm going to take advantage of the possibilities offered by running on the client machine, but at the same time significant parts of the application run as services on the back end, as shown in Figure 1.

The sample solution is composed of three projects, which you can download from github.com/ayende/alexandria. Alexandria.Backend is a console application that hosts the back-end code. Alexandria.Client contains the front-end code, and Alexandria.Messages contains the message definitions shared between them. To run the sample, both Alexandria.Backend and Alexandria.Client need to be running.

One advantage of hosting the back end in a console application is that it allows you to easily simulate disconnected scenarios by simply shutting down the back-end console application and starting

Considering the fact that the main source for remote calls in most Web applications is a database or another application server located in the same datacenter (and often in the same rack), this is a drastic change with several implications. Intranet and Internet connections suffer from issues of speed, bandwidth limitations and security. The vast difference in the costs of communication dictates a different communication structure than the one you'd adopt if all the major pieces in the application were residing in the same datacenter.

Among the biggest hurdles you have to deal with in distributed applications are the fallacies of distributed computing. These are a set of assumptions that developers tend to make when building distributed applications, which ultimately prove false. Relying on these false assumptions usually results in reduced capabilities or a very high cost to redesign and rebuild the system. There are eight fallacies:
• The network is reliable.
• Latency is zero.
• Bandwidth is infinite.
• The network is secure.
• Topology doesn't change.
• There is one administrator.
• Transport cost is zero.
• The network is homogeneous.

Any distributed application that doesn't take these fallacies into account is going to run into severe problems. A smart client application needs to deal with those issues head on. The use of caching is a topic of great importance in such circumstances. Even if you aren't interested in working in a disconnected fashion, a cache is almost always useful for increasing application responsiveness.

Another aspect you need to consider is the communication
it up at a later time. model for the application. It may seem that the simplest model
is a standard service proxy that allows you to perform remote
Fallacies of Distributed Computing procedure calls (RPCs), but this tends to cause problems down
With the architectural basics in hand, let’s take a look at the impli- the road. It leads to more-complex code to handle a disconnected
cations of writing a smart client application. Communication with state and requires you to explicitly handle asynchronous calls if
the back end is going to be through an intranet or the Internet. you want to avoid blocking in the UI thread.
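The cache-for-responsiveness idea can be made concrete with a small sketch. This is illustrative only, not the Alexandria code; the cache shape and the delegates are assumptions for the sketch: serve whatever is cached immediately, and refresh it from the back end on a background thread.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Illustrative cache-first lookup: return cached data right away (possibly
// stale) and refresh it in the background. Hypothetical names, not the
// Alexandria sample's types.
public static class ResponsiveCache {
    static readonly Dictionary<string, string> cache =
        new Dictionary<string, string>();
    static readonly object sync = new object();

    public static string Get(string key,
                             Func<string> fetchFromBackend,
                             Action<string> onRefreshed) {
        string cached;
        bool hit;
        lock (sync) hit = cache.TryGetValue(key, out cached);
        if (hit) {
            // Serve stale data now; fetch fresh data off the calling thread.
            ThreadPool.QueueUserWorkItem(_ => {
                var fresh = fetchFromBackend();
                lock (sync) cache[key] = fresh;
                onRefreshed(fresh); // e.g. update the application model
            });
            return cached;
        }
        var value = fetchFromBackend(); // first access must hit the back end
        lock (sync) cache[key] = value;
        return value;
    }
}
```

A stock-trading application, by contrast, would skip the cached branch entirely and show nothing until fresh data arrives.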
msdnmagazine.com July 2010 59
Back-End Basics

Next, there's the problem of how to structure the back end of the application in a way that provides both good performance and a degree of separation from the way the UI is structured.

The ideal scenario from a performance and responsiveness perspective is to make a single call to the back end to get all the data you need for the presented screen. The problem with going this route is that you end up with a service interface that mimics the smart client UI exactly. This is bad for a whole host of reasons. Mainly, the UI is the most changeable part of an application. Tying the service interface to the UI in this fashion results in frequent changes to the service, driven purely by UI changes.

That, in turn, means deployment of the application just got a lot harder. You have to deploy both the front end and the back end at the same time, and trying to support multiple versions at the same time is likely to result in greater complexity. In addition, the service interface can't be used to build additional UIs or as an integration point for third-party or additional services.

If you try going the other route—building a standard, fine-grained interface—you'll run head on into the fallacies (a fine-grained interface leads to a high number of remote calls, resulting in issues with latency, reliability and bandwidth).

The answer to this challenge is to break away from the common RPC model. Instead of exposing methods to be called remotely, let's use a local cache and a message-oriented communication model.

Figure 2 shows how you pack several requests from the front end to the back end. This allows you to make a single remote call, but keep a programming model on the server side that isn't tightly coupled to the needs of the UI.

Figure 2 A Single Request to the Server Contains Several Messages (the User Interface packs MyBooks, MyQueue, Recommendations and Subscription Details queries into one request, served alongside a Local Cache)

To increase responsiveness, you can include a local cache that can answer some queries immediately, leading to a more-responsive application.

One of the things you have to consider in these scenarios is what types of data you have and what the freshness requirements are for any data you display. In the Alexandria application, I lean heavily on the local cache because it is acceptable to show the user cached data while the application requests fresh data from the back-end system. Other applications—stock trading, for example—should probably show nothing at all rather than stale data.

Disconnected Operations

The next problem you have to face is handling disconnected scenarios. In many applications, you can specify that a connection is mandatory, which means you can simply show the user an error if the back-end servers are unavailable. But one benefit of a smart client application is that it can work in a disconnected manner, and the Alexandria application takes full advantage of that. However, this means the cache becomes even more important, because it's used both to speed communication and to serve data from the cache if the back-end system is unreachable.

By now, I believe you have a good understanding of the challenges involved in building such an application, so let's move on to see how to solve those challenges.

Queues Are One of My Favorite Things

In Alexandria, there's no RPC communication between the front end and the back end. Instead, as shown in Figure 3, all communication is handled via one-way messages going through queues.

Figure 3 The Alexandria Communication Model (the User Interface and an Application Server running NHibernate & Rhino Service Bus communicate through queues)

Queues provide a rather elegant way of solving the communication issues described earlier. Instead of coping with every failure mode yourself (writing code to handle disconnected scenarios is hard), you can let the queuing subsystem handle all of that.

Using queues is quite simple. You ask your local queuing subsystem to send a message to some queue. The queuing subsystem takes ownership of the message and ensures that it reaches its destination at some point. Your application, however, doesn't wait for the message to reach its destination and can carry on doing its work. If the destination queue is not currently available, the queuing subsystem will wait until the destination queue becomes available again, then deliver the message. The queuing subsystem usually
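The fire-and-forget semantics described here can be sketched with a toy in-memory queue. This is a stand-in for a real queuing subsystem such as the one Rhino Service Bus sits on top of, not its actual API; all the names below are invented for illustration.

```csharp
using System;
using System.Collections.Generic;

// A toy one-way bus: Send enqueues and returns immediately; the back end
// drains the queue whenever it happens to be available. Invented names,
// not the Rhino Service Bus API.
public class MyBooksQuery { public Guid UserId; }

public class ToyQueue {
    readonly Queue<object> pending = new Queue<object>();

    // Fire and forget: the caller never waits for delivery.
    public void Send(object message) { pending.Enqueue(message); }

    // "Delivery": invoked when the destination becomes available again.
    public int Drain(Action<object> handler) {
        int delivered = 0;
        while (pending.Count > 0) {
            handler(pending.Dequeue());
            delivered++;
        }
        return delivered;
    }
}
```

The front end can call Send even while the back end is down; messages simply wait until delivery becomes possible again.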
… and destinations.

This simply updates the application model with the data from the message. One thing, however, should be noted: the consume method is not called on the UI thread. Instead, it's called on a background thread. The application model is bound to the UI, however, so updating it must happen on the UI thread. The UpdateFrom method is aware of that and will switch to the UI thread to update the application model on the correct thread.

The code for handling the other messages, on both the back end and the front end, is similar. This communication is purely …

  private void TransportOnMessageProcessingCompleted(
    CurrentMessageInformation currentMessageInformation,
    Exception exception) {
    if (currentSession != null)
      currentSession.Dispose();
    currentSession = null;
  }

  private bool TransportOnMessageArrived(
    CurrentMessageInformation currentMessageInformation) {
    if (currentSession == null)
      currentSession = sessionFactory.OpenSession();
    return false;
  }
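The thread switch that UpdateFrom performs can be sketched with SynchronizationContext, which is what the WPF and Silverlight dispatchers build on. The model type and the test context below are invented for illustration; Alexandria's actual implementation may differ.

```csharp
using System.Threading;

// Sketch of the UpdateFrom pattern: a message handler runs on a background
// thread, so the model posts its mutation to the UI thread's context.
// BookModel is a hypothetical model class, not Alexandria's.
public class BookModel {
    readonly SynchronizationContext ui;
    public string Title { get; private set; }

    public BookModel(SynchronizationContext uiContext) { ui = uiContext; }

    // Safe to call from the messaging thread.
    public void UpdateFrom(string freshTitle) {
        ui.Post(_ => { Title = freshTitle; }, null);
    }
}

// Stand-in for the UI dispatcher so the sketch is runnable without a UI:
// it executes posted callbacks synchronously.
public class ImmediateContext : SynchronizationContext {
    public override void Post(SendOrPostCallback d, object state) { d(state); }
}
```

In a real WPF client, the context passed in would be the UI thread's SynchronizationContext.Current, captured at startup.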
    var array = responses.ToArray();
    if (array.Length == 0)
      return;
    bus.ConsumeMessages(array);
  }

OREN EINI (who works under the pseudonym Ayende Rahien) is an active member of several open source projects (NHibernate and Castle among them) and is the founder of many others (Rhino Mocks, NHibernate Query Analyzer and Rhino Commons among them). Eini is also responsible for the NHibernate Profiler (nhprof.com), a visual debugger for NHibernate. You can follow Eini's work at ayende.com/blog.
C# 4.0

New C# Features in the .NET Framework 4

Chris Burrows
Since its initial release in 2002, the C# programming language has been improved to enable programmers to write clearer, more maintainable code. The enhancements have come from the addition of features such as generic types, nullable value types, lambda expressions, iterator methods, partial classes and a long list of other useful language constructs. And, often, the changes were accompanied by corresponding support in the Microsoft .NET Framework libraries.

This trend toward increased usability continues in C# 4.0. The additions make common tasks involving generic types, legacy interop and working with dynamic object models much simpler. This article aims to give a high-level survey of these new features. I'll begin with generic variance and then look at the legacy and dynamic interop features.

This article discusses:
• Covariance and contravariance
• Dynamic dispatch
• Named arguments and optional parameters
• COM interop

Technologies discussed: C#, Microsoft .NET Framework 4, COM

Covariance and Contravariance

Covariance and contravariance are best introduced with an example, and the best is in the framework. In System.Collections.Generic, IEnumerable<T> and IEnumerator<T> represent, respectively, an object that's a sequence of T's and the enumerator (or iterator) that does the work of iterating the sequence. These interfaces have done a lot of heavy lifting for a long time, because they support the implementation of the foreach loop construct. In C# 3.0, they became even more prominent because of their central role in LINQ and LINQ to Objects—they're the .NET interfaces that represent sequences.

So if you have a class hierarchy with, say, an Employee type and a Manager type that derives from it (managers are employees, after all), then what would you expect the following code to do?

  IEnumerable<Manager> ms = GetManagers();
  IEnumerable<Employee> es = ms;

It seems as though one ought to be able to treat a sequence of Managers as though it were a sequence of Employees. But in C# 3.0, the assignment will fail; the compiler will tell you there's no conversion. After all, it has no idea what the semantics of IEnumerable<T> are. This could be any interface, so for any arbitrary interface IFoo<T>, why would an IFoo<Manager> be more or less substitutable for an IFoo<Employee>?

In C# 4.0, though, the assignment works because IEnumerable<T>, along with a few other interfaces, has changed, an alteration enabled by new support in C# for covariance of type parameters. IEnumerable<T> is eligible to be more special than the arbitrary IFoo<T> because, though it's not obvious at first glance, members that use the type parameter T (GetEnumerator in IEnumerable<T> and the Current property in IEnumerator<T>) actually use T only in the position of a return value. So you only get a Manager out of the sequence, and you never put one in.
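The assignment above, which fails under C# 3.0, compiles under C# 4.0. Here is a small self-contained sketch of that covariant conversion, together with the contravariant one on Action<in T> that the discussion below explains; the Employee and Manager types are the hypothetical ones from the text.

```csharp
using System;
using System.Collections.Generic;

class Employee { public string Name; }
class Manager : Employee { }

static class VarianceDemo {
    // Accepts any sequence of employees.
    static int Count(IEnumerable<Employee> es) {
        int n = 0;
        foreach (var e in es) n++;
        return n;
    }

    public static int Run() {
        // Covariance: IEnumerable<out T> lets a sequence of Managers
        // be treated as a sequence of Employees.
        IEnumerable<Manager> ms = new List<Manager> {
            new Manager { Name = "Ann" }, new Manager { Name = "Bob" }
        };
        int count = Count(ms);

        // Contravariance: Action<in T> lets a handler for any Employee
        // stand in where a handler for Managers is expected.
        Action<Employee> handle = e => { /* works for any employee */ };
        Action<Manager> handleManager = handle;
        handleManager(new Manager { Name = "Cat" });

        return count;
    }
}
```

Both assignments are implicit reference conversions; no copying or wrapping takes place at run time.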
In contrast, think of List<T>. Making a List<Manager> substitutable for a List<Employee> would be a disaster, because of the following:

  List<Manager> ms = GetManagers();
  List<Employee> es = ms; // Suppose this were possible
  es.Add(new EmployeeWhoIsNotAManager()); // Uh oh

As this shows, once you think you're looking at a List<Employee>, you can insert any employee. But the list in question is actually a List<Manager>, so inserting a non-Manager must fail. You've lost type safety if you allow this. List<T> cannot be covariant in T.

The new language feature in C# 4.0, then, is the ability to define types, such as the new IEnumerable<T>, that admit conversions among themselves when the type parameters in question bear some relationship to one another. This is what the .NET Framework developers who wrote IEnumerable<T> used, and this is what their code looks like (simplified, of course):

  public interface IEnumerable<out T> { /* ... */ }

Notice the out keyword modifying the definition of the type parameter, T. When the compiler sees this, it will mark T as covariant and check that, in the definition of the interface, all uses of T are up to snuff (in other words, that they're used in out positions only—that's why this keyword was picked).

Why is this called covariance? Well, it's easiest to see when you start to draw arrows. To be concrete, let's use the Manager and Employee types. Because there's an inheritance relationship between these classes, there's an implicit reference conversion from Manager to Employee:

  Manager → Employee

And now, because of the annotation of T in IEnumerable<out T>, there's also an implicit reference conversion from IEnumerable<Manager> to IEnumerable<Employee>. That's what the annotation provides:

  IEnumerable<Manager> → IEnumerable<Employee>

This is called covariance because the arrows in the two examples point in the same direction. We started with two types, Manager and Employee. We made new types out of them, IEnumerable<Manager> and IEnumerable<Employee>. The new types convert the same way as the old ones.

Contravariance is when this happens backward. You might anticipate that this could happen when the type parameter, T, is used only as input, and you'd be right. For example, the System namespace contains an interface called IComparable<T>, which has a single method called CompareTo:

  public interface IComparable<in T> {
    int CompareTo(T other);
  }

If you have an IComparable<Employee>, you should be able to treat it as though it were an IComparable<Manager>, because the only thing you can do is put Employees into the interface. Because a manager is an employee, putting a manager in should work, and it does. The in keyword modifies T in this case, and this scenario functions correctly:

  IComparable<Employee> ec = GetEmployeeComparer();
  IComparable<Manager> mc = ec;

This is called contravariance because the arrow got reversed this time:

  Manager → Employee
  IComparable<Manager> ← IComparable<Employee>

So the language feature here is pretty simple to summarize: you can add the keyword in or out whenever you define a type parameter, and doing so gives you free extra conversions. There are some limitations, though.

First, this works with generic interfaces and delegates only. You can't declare a generic type parameter on a class or struct in this manner. An easy way to rationalize this is that delegates are very much like interfaces that have just one method, and in any case, classes would often be ineligible for this treatment because of fields. You can think of any field on the generic class as being both an input and an output, depending on whether you write to it or read from it. If those fields involve type parameters, the parameters can be neither covariant nor contravariant.

Second, whenever you have an interface or delegate with a covariant or contravariant type parameter, you're granted new conversions on that type only when the type arguments, in the usage of the interface (not its definition), are reference types. For instance, because int is a value type, IEnumerator<int> doesn't convert to IEnumerator<object>, even though it looks like it should:

  IEnumerator<int> ↛ IEnumerator<object>

The reason for this behavior is that the conversion must preserve the type representation. If the int-to-object conversion were allowed, calling the Current property on the result would be impossible, because the value type int has a different representation on the stack than an object reference does. All reference types have the same representation on the stack, however, so only type arguments that are reference types yield these extra conversions.

Very likely, most C# developers will happily use this new language feature—they'll get more conversions of framework types and fewer compiler errors when using some types from the .NET Framework (IEnumerable<T>, IComparable<T>, Func<T> and Action<T>, among others). And, in fact, anyone designing a library with generic interfaces and delegates is free to use the new in and out type parameters when appropriate to make life easier for their users.

By the way, this feature does require support from the runtime—but the support has always been there. It lay dormant for several releases, however, because no language made use of it. Also, previous versions of C# allowed some limited conversions that were contravariant. Specifically, they let you make delegates out of methods that had compatible return types. In addition, array types have always been covariant. These existing features are distinct from the new ones in C# 4.0, which actually let you define your own types that are covariant and contravariant in some of their type parameters.

Dynamic Dispatch

On to the interop features in C# 4.0, starting with what is perhaps the biggest change.

C# now supports dynamic late-binding. The language has always been strongly typed, and it continues to be so in version 4.0. Microsoft believes this makes C# easy to use, fast and suitable for all the work .NET programmers are putting it to. But there are times when you need to communicate with systems not based on .NET.

Traditionally, there were at least two approaches to this. The first was simply to import the foreign model directly into .NET as a proxy. COM Interop provides one example. Since the original release of the
.NET Framework, it has used this strategy with a tool called TLBIMP, which creates new .NET proxy types you can use directly from C#. LINQ-to-SQL, shipped with C# 3.0, contains a tool called SQLMETAL, which imports an existing database into C# proxy classes for use with queries. You'll also find a tool that imports Windows Management Instrumentation (WMI) classes to C#. Many technologies allow you to write C# (often with attributes) and then perform interop using your handwritten code as the basis for external actions, such as LINQ-to-SQL, Windows Communication Foundation (WCF) and serialization.

The second approach abandons the C# type system entirely—you embed strings and data in your code. This is what you do whenever you write code that, say, invokes a method on a JScript object, or when you embed a SQL query in your ADO.NET application. You're even doing this when you defer binding to run time using reflection, even though the interop in that case is with .NET itself.

The dynamic keyword in C# is a response to the hassles of these other approaches. Let's start with a simple example—reflection. Normally, using it requires a lot of boilerplate infrastructure code, such as:

  object o = GetObject();
  Type t = o.GetType();
  object result = t.InvokeMember("MyMethod",
    BindingFlags.InvokeMethod, null,
    o, new object[] { });
  int i = Convert.ToInt32(result);

With the dynamic keyword, instead of calling a method MyMethod on some object using reflection in this manner, you can now tell the compiler to please treat o as dynamic and delay all analysis until run time. Code that does that looks like this:

  dynamic o = GetObject();
  int i = o.MyMethod();

It works, and it accomplishes the same thing with code that's much less convoluted.

The value of this shortened, simplified C# syntax is perhaps clearer if you look at the ScriptObject class that supports operations on a JScript object. The class has an InvokeMember method that has more and different parameters, except in Silverlight, which actually has an Invoke method (notice the difference in the name) with fewer parameters. Neither of these is the same as what you'd need to invoke a method on an IronPython or IronRuby object, or on any number of non-C# objects you might come into contact with.

In addition to objects that come from dynamic languages, you'll find a variety of data models that are inherently dynamic and have different APIs supporting them, such as HTML DOMs, the System.Xml DOM and the XLinq model for XML. COM objects are often dynamic and can benefit from delaying some compiler analysis to run time.

Essentially, C# 4.0 offers a simplified, consistent view of dynamic operations. To take advantage of it, all you need to do is specify that a given value is dynamic, ensuring that analysis of all operations on the value will be delayed until run time.

In C# 4.0, dynamic is a built-in type, signified by a special pseudo-keyword. Note, however, that dynamic is different from var. Variables declared with var actually do have a strong type, but the programmer has left it up to the compiler to figure it out. When the programmer uses dynamic, the compiler doesn't know what type is being used—the programmer leaves figuring it out up to the runtime.

Dynamic and the DLR

The infrastructure that supports these dynamic operations at run time is called the Dynamic Language Runtime (DLR). This new .NET Framework 4 library runs on the CLR, like any other managed library. It's responsible for brokering each dynamic operation between the language that initiated it and the object it occurs on. If a dynamic operation isn't handled by the object it occurs on, a runtime component of the C# compiler handles the bind. A simplified and incomplete architecture diagram looks something like Figure 1.

Figure 1 The DLR Runs on Top of the CLR (C#, IronPython, IronRuby and dynamic APIs sit atop the Dynamic Language Runtime, which runs, alongside the rest of the .NET Framework, on the Common Language Runtime)

The interesting thing about a dynamic operation, such as a dynamic method call, is that the receiver object has an opportunity to inject itself into the binding at run time and can, as a result, completely determine the semantics of any given dynamic operation. For instance, take a look at the following code:

  dynamic d = new MyDynamicObject();
  d.Bar("Baz", 3, d);

If MyDynamicObject was defined as shown here, then you can imagine what happens:

  class MyDynamicObject : DynamicObject {
    public override bool TryInvokeMember(
      InvokeMemberBinder binder,
      object[] args, out object result) {
      Console.WriteLine("Method: {0}", binder.Name);
      foreach (var arg in args) {
        Console.WriteLine("Argument: {0}", arg);
      }
      result = args[0];
      return true;
    }
  }

In fact, the code prints:

  Method: Bar
  Argument: Baz
  Argument: 3
  Argument: MyDynamicObject

By declaring d to be of type dynamic, the code that consumes the MyDynamicObject instance effectively opts out of compile-time checking for the operations d participates in. Use of dynamic means "I don't know what type this is going to be, so I don't know
what methods or properties there are right now. Compiler, please let them all through and then figure it out when you really have an object at run time." So the call to Bar compiles even though the compiler doesn't know what it means. Then at run time, the object itself is asked what to do with this call to Bar. That's what TryInvokeMember knows how to handle.

Now, suppose that instead of a MyDynamicObject, you used a Python object:

  dynamic d = GetPythonObject();
  d.bar("Baz", 3, d);

If the object is the file listed here, then the code also works, and the output is much the same:

  def bar(*args):
    print "Method:", bar.__name__
    for x in args:
      print "Argument:", x

Under the covers, for each use of a dynamic value, the compiler generates a bunch of code that initializes and uses a DLR CallSite. That CallSite contains all the information needed to bind at run time, including such things as the method name, extra data (such as whether the operation takes place in a checked context) and information about the arguments and their types.

This code, if you had to maintain it, would be every bit as ugly as the reflection code shown earlier, or the ScriptObject code, or strings that contain XML queries. That's the point of the dynamic feature in C#—you don't have to write code like that!

When using the dynamic keyword, your code can look pretty much the way you want: like a simple method invocation, a call to an indexer, an operator such as +, a cast, or even compounds like += or ++. You can even use dynamic values in statements—for example, if(d) and foreach(var x in d). Short-circuiting is also supported, with code such as d && ShortCircuited or d ?? ShortCircuited.

The value of having the DLR provide a common infrastructure for these sorts of operations is that you no longer have to deal with a different API for each dynamic model you'd like to code against—there's just a single API. And you don't even have to use it. The C# compiler can use it for you, and that should give you more time to actually write the code you want—less infrastructure code to maintain means more productivity for you.

The C# language provides no shortcuts for defining dynamic objects. Dynamic in C# is all about consuming and using dynamic objects. Consider the following:

  dynamic list = GetDynamicList();
  dynamic index1 = GetIndex1();
  dynamic index2 = GetIndex2();
  string s = list[++index1, index2 + 10].Foo();

This code compiles, and it contains a lot of dynamic operations. First, there's the dynamic pre-increment on index1, then the dynamic add with index2. Then a dynamic indexer get is called on list. The product of those operations calls the member Foo. Finally, the total result of the expression is converted to a string and stored in s. That's five dynamic operations in one line, each dispatched at run time.

The compile-time type of each dynamic operation is itself dynamic, and so the "dynamicness" kind of flows from computation to computation. Even if you hadn't included dynamic expressions multiple times, there would still be a number of dynamic operations. There are still five in this one line:

  string s = nonDynamicList[++index1, index2 + 10].Foo();

Because the results of the two indexing expressions are dynamic, the index itself is as well. And because the result of the index is dynamic, so is the call to Foo. Then you're confronted with converting a dynamic value to a string. That happens dynamically, of course, because the object could be a dynamic one that wants to perform some special computation in the face of a conversion request.

Notice in the previous examples that C# allows implicit conversions from any dynamic expression to any type. The conversion to string at the end is implicit and did not require an explicit cast operation. Similarly, any type can be converted to dynamic implicitly.

In this respect, dynamic is a lot like object, and the similarities don't stop there. When the compiler emits your assembly and needs to emit a dynamic variable, it does so by using the type object and then marking it specially. In some sense, dynamic is kind of an alias for object, but it adds the extra behavior of dynamically resolving operations when you use it.

You can see this if you try to convert between generic types that differ only in dynamic and object; such conversions will always work, because at run time an instance of List<dynamic> actually is an instance of List<object>:

  List<dynamic> ld = new List<object>();

You can also see the similarity between dynamic and object if you try to override a method that's declared with an object parameter:

  class C {
    public override bool Equals(dynamic obj) {
      /* ... */
    }
  }

Although it resolves to a decorated object in your assembly, I do like to think of dynamic as a real type, because it serves as a reminder that you can do most things with it that you can do with any other type. You can use it as a type argument or, say, as a return value. For instance, this function definition will let you use the result of the function call dynamically without having to put its return value in a dynamic variable:

  public dynamic GetDynamicThing() {
    /* ... */
  }

There are a lot more details about the way dynamic is treated and dispatched, but you don't need to know them to use the feature. The essential idea is that you can write code that looks like C#, and if any part of the code you write is dynamic, the compiler will leave it alone until run time.

I want to cover one final topic concerning dynamic: failure. Because the compiler can't check whether the dynamic thing you're using really has a method called Foo, it can't give you an error. Of course, that doesn't mean that your call to Foo will work at run time. It may work, but there are a lot of objects that don't have a
method called Foo. When your expression fails to bind at run time, the binder makes its best attempt to give you an exception that's more or less exactly what the compiler would've told you if you hadn't used dynamic to begin with. Consider the following code:

  try
  {
    dynamic d = "this is a string";
    d.Foo();
  }
  catch (Microsoft.CSharp.RuntimeBinder.RuntimeBinderException e)
  {
    Console.WriteLine(e.Message);
  }

Here I have a string, and strings clearly do not have a method called Foo. When the line that calls Foo executes, the binding will fail and you'll get a RuntimeBinderException. This is what the previous program prints:

  'string' does not contain a definition for 'Foo'

That's exactly the error message you, as a C# programmer, expect.

Named Arguments and Optional Parameters

In another addition to C#, methods now support optional parameters with default values, so that when you call such a method you can omit those parameters. You can see this in action in this Car class:

  class Car {
    public void Accelerate(
      double speed, int? gear = null,
      bool inReverse = false) {
      /* ... */
    }
  }

You can call the method this way:

  Car myCar = new Car();
  myCar.Accelerate(55);

This has exactly the same effect as:

  myCar.Accelerate(55, null, false);

It's the same because the compiler will insert all the default values that you omit.

C# 4.0 will also let you call methods by specifying some arguments by name. In this way, you can pass an argument to an optional parameter without having to also pass arguments for all the parameters that come before it.

Say you want to call Accelerate to go in reverse, but you don't want to specify the gear parameter. Well, you can do this:

  myCar.Accelerate(55, inReverse: true);

This is new C# 4.0 syntax, and it's the same as if you had written:

  myCar.Accelerate(55, null, true);

In fact, whether or not parameters in the method you're calling are optional, you can use names when passing arguments. For instance, these two calls are permissible and identical to one another:

  Console.WriteLine(format: "{0:f}", arg0: 6.02214179e23);
  Console.WriteLine(arg0: 6.02214179e23, format: "{0:f}");

If you're calling a method that takes a long list of parameters, you can even use names as a sort of in-code documentation to help you remember which parameter is which.

On the surface, named arguments and optional parameters don't look like interop features. You can use them without ever going near interop. But consider Office programming and something as simple as the SaveAs method on the Document interface. This method has 16 parameters, all of which are optional. With previous versions of C#, if you want to call this method you have to write code that looks like this:

  Document d = new Document();
  object filename = "Foo.docx";
  object missing = Type.Missing;
  d.SaveAs(ref filename, ref missing, ref missing, ref missing,
    ref missing, ref missing, ref missing, ref missing,
    ref missing, ref missing, ref missing, ref missing,
    ref missing, ref missing, ref missing, ref missing);

Now, you can write this:

  Document d = new Document();
  d.SaveAs(FileName: "Foo.docx");

I would say that's an improvement for anyone who works with APIs like this. And improving the lives of programmers who need to write Office programs was definitely a motivating factor for adding named arguments and optional parameters to the language.

Now, when writing a .NET library and considering adding methods that have optional parameters, you're faced with a choice. You can either add optional parameters or you can do what C# programmers have done for years: introduce overloads. In the Car.Accelerate example, the latter decision might lead you to produce a type that looks like this:

  class Car {
    public void Accelerate(uint speed) {
      Accelerate(speed, null, false);
    }
    public void Accelerate(uint speed, int? gear) {
      Accelerate(speed, gear, false);
    }
    public void Accelerate(uint speed, int? gear,
      bool inReverse) {
      /* ... */
    }
  }

Selecting the model that suits the library you're writing is up to you. Because C# hasn't had optional parameters until now, the .NET Framework (including the .NET Framework 4) tends to use overloads. If you decide to mix and match overloads with optional parameters, the C# overload resolution has clear tie-breaking rules to determine which overload to call under any given circumstances.

Indexed Properties

Some smaller language features in C# 4.0 are supported only when writing code against a COM interop API. The Word interop in the previous illustration is one example.

C# code has always had the notion of an indexer that you can add to a class to effectively overload the [] operator on instances of that class. This sense of indexer is also called a default indexer, since it isn't given a name and calling it requires no name. Some COM APIs also have indexers that aren't default, which is to say that you can't effectively call them simply by using []—you
even thinking about interop. However, the motivation for these must specify a name. You can, alternatively, think of an indexed
features comes from the Office APIs. Consider, for example, Word property as a property that takes some extra arguments.
72 msdn magazine C# 4.0
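For readers who want to see the default indexer the article contrasts with COM's named indexed properties, here is a plain C# class that overloads []. This is a minimal sketch; the Grid class and its backing array are invented for illustration:

```csharp
using System;

// A hypothetical class with a default indexer: callers use []
// directly, with no member name, which is the "default" sense
// of indexer described above.
class Grid
{
    private readonly int[,] cells = new int[10, 10];

    // The default indexer: this[...] has no name of its own.
    public int this[int row, int column]
    {
        get { return this.cells[row, column]; }
        set { this.cells[row, column] = value; }
    }
}

class Program
{
    static void Main()
    {
        Grid g = new Grid();
        g[1, 2] = 42;               // calls the set accessor
        Console.WriteLine(g[1, 2]); // calls the get accessor; prints 42
    }
}
```

A COM indexed property such as Range["A1", "C3"] behaves much like this indexer except that it also carries a name, which is why C# before 4.0 had to fall back on accessor methods such as get_Range.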
C# 4.0 supports indexed properties on COM interop types. You can’t define types in C# that have indexed properties, but you can use them provided you’re doing so on a COM type. For an example of what C# code that does this looks like, consider the Range property on an Excel worksheet:

  using Microsoft.Office.Interop.Excel;

  class Program {
    static void Main(string[] args) {
      Application excel = new Application();
      excel.Visible = true;
      Worksheet ws = excel.Workbooks.Add().Worksheets["Sheet1"];
      ws.Range["A1", "C3"].Value = 123;
    }
  }

In this example, Range["A1", "C3"] isn’t a property called Range that returns a thing that can be indexed. It’s one call to a Range accessor that passes A1 and C3 with it. And although Value might not look like an indexed property, it, too, is one! All of its arguments are optional, and because it’s an indexed property, you omit them by not specifying them at all. Before the language supported indexed properties, you would have written the call like this:

  ws.get_Range("A1", "C3").Value2 = 123;

Here, Value2 is a property that was added simply because the indexed property Value wouldn’t work prior to C# 4.0.

Omitting the Ref Keyword at COM Call Sites
Some COM APIs were written with many parameters passed by reference, even when the implementation doesn’t write back to them. In the Office suite, Word stands out as an example—its COM APIs all do this. When you’re confronted with such a library and you need to pass arguments by reference, you can no longer pass any expression that’s not a local variable or field, and that’s a big headache. In the Word SaveAs example, you can see this in action—you had to declare a local called filename and a local called missing just to call the SaveAs method, since those parameters needed to be passed by reference:

  Document d = new Document();
  object filename = "Foo.docx";
  object missing = Type.Missing;
  d.SaveAs(ref filename, ref missing, // ...

You may have noticed in the new C# code that followed, I no longer declared a local for filename:

  d.SaveAs(FileName: "Foo.docx");

This is possible because of the new omit ref feature for COM interop. Now, when calling a COM interop method, you can pass any argument by value instead of by reference. If you do, the compiler will create a temporary local on your behalf and pass that local by reference for you if required. Of course, you won’t be able to see the effect of the method call if the method mutates the argument—if you want that, pass the argument by ref.

This should make code that uses APIs like this much cleaner.

Embedding COM Interop Types
This is more of a C# compiler feature than a C# language feature, but now you can use a COM interop assembly without that assembly having to be present at run time. The goal is to reduce the burden of deploying COM interop assemblies with your application.

When COM interop was introduced in the original version of the .NET Framework, the notion of a Primary Interop Assembly (PIA) was created. This was an attempt to solve the problem of sharing COM objects among components. If you had different interop assemblies that defined an Excel Worksheet, we wouldn’t be able to share these Worksheets between components, because they would be different .NET types. The PIA fixed this by existing only once—all clients used it, and the .NET types always matched.

Though a fine idea on paper, in practice deploying a PIA turns out to be a headache, because there’s only one, and multiple applications could try to install or uninstall it. Matters are complicated because PIAs are often large, Office doesn’t deploy them with default Office installations, and users can circumvent this single assembly system easily just by using TLBIMP to create their own interop assembly.

So now, in an effort to fix this situation, two things have happened:

• The runtime has been given the smarts to treat two structurally identical COM interop types that share the same identifying characteristics (name, GUID and so on) as though they were actually the same .NET type.

• The C# compiler takes advantage of this by simply reproducing the interop types in your own assembly when you compile, removing the need for the interop assembly to exist at run time.

I have to omit some details in the interest of space, but even without knowledge of the details, this is another feature—like dynamic—you should be able to use without a problem. You tell the compiler to embed interop types for you in Visual Studio by setting the Embed Interop Types property on your reference to true.

Because the C# team expects this to be the preferred method of referencing COM assemblies, Visual Studio will set this property to True by default for any new interop reference added to a C# project. If you’re using the command-line compiler (csc.exe) to build your code, then to embed interop types you must reference the interop assembly in question using the /L switch rather than /R.

Each of the features I’ve covered in this article could itself generate much more discussion, and the topics all deserve articles of their own. I’ve omitted or glossed over many details, but I hope this serves as a good starting point for exploring C# 4.0 and you find time to investigate and make use of these features. And if you do, I hope you enjoy the benefits in productivity and program readability they were designed to give you.

CHRIS BURROWS is a developer at Microsoft on the C# compiler team. He implemented dynamic in the C# compiler and has been involved with the development of Visual Studio for nine years.

THANKS to the following technical expert for reviewing this article: Eric Lippert
DESIGN PATTERNS

Windows Presentation Foundation (WPF) and Silverlight provide rich APIs for building modern applications, but understanding and applying all the WPF features in harmony with each other to build well-designed and easily maintained apps can be difficult. Where do you start? And what is the right way to compose your application?

The Model-View-ViewModel (MVVM) design pattern describes a popular approach for building WPF and Silverlight applications. It’s both a powerful tool for building applications and a common language for discussing application design with developers. While MVVM is a really useful pattern, it’s still relatively young and misunderstood.

When is the MVVM design pattern applicable, and when is it unnecessary? How should the application be structured? How much work is the ViewModel layer to write and maintain, and what alternatives exist for reducing the amount of code in the ViewModel layer? How are related properties within the Model handled elegantly? How should you expose collections within the Model to the View? Where should ViewModel objects be instantiated and hooked up to Model objects?

In this article I’ll explain how the ViewModel works, and discuss some benefits and issues involved in implementing a ViewModel in your code. I’ll also walk you through some concrete examples of using ViewModel as a document manager for exposing Model objects in the View layer.

This article discusses:
• Model, ViewModel, and View
• Why use a ViewModel?
• Using dynamic properties
• A document manager adapter

Technologies discussed:
Windows Presentation Foundation, Silverlight

Model, ViewModel and View
Every WPF and Silverlight application I’ve worked on so far had the same high-level component design. The Model was the core of the application, and a lot of effort went into designing it according to object-oriented analysis and design (OOAD) best practices. For me the Model is the heart of the application, representing the biggest and most important business asset because it captures all the complex business entities, their relationships and their functionality.

Sitting atop the Model is the ViewModel. The two primary goals of the ViewModel are to make the Model easily consumable by the WPF/XAML View and to separate and encapsulate the Model from the View. These are excellent goals, although for pragmatic reasons they’re sometimes broken.

You build the ViewModel knowing how the user will interact with the application at a high level. However, it’s an important part
of the MVVM design pattern that the ViewModel knows nothing about the View. This allows the interaction designers and graphics artists to create beautiful, functional UIs on top of the ViewModel while working closely with the developers to design a suitable ViewModel to support their efforts. In addition, decoupling between View and ViewModel also allows the ViewModel to be more unit testable and reusable.

To help enforce a strict separation between the Model, View and ViewModel layers, I like to build each layer as a separate Visual Studio project. Combined with the reusable utilities, the main executable assembly and any unit testing projects (you have plenty of these, right?), this can result in a lot of projects and assemblies, as illustrated in Figure 1.

Given the large number of projects, this strict-separation approach is obviously most useful on large projects. For small applications with only one or two developers, the benefits of this strict separation may not outweigh the inconvenience of creating, configuring and maintaining multiple projects, so simply separating your code into different namespaces within the same project may provide more than sufficient isolation.

Writing and maintaining a ViewModel is not trivial and it should not be undertaken lightly. However, the answer to the most important questions—when is the MVVM design pattern applicable, and when is it unnecessary—is often found in your domain model. In large projects, the domain model may be very complex, with hundreds of classes carefully designed to work elegantly together for any type of application, including Web services, WPF or ASP.NET applications. The Model may comprise several assemblies working together, and in very large organizations the domain model is sometimes built and maintained by a specialized development team. When you have a large and complex domain model, it’s almost always beneficial to introduce a ViewModel layer.

On the other hand, sometimes the domain model is simple, perhaps nothing more than a thin layer over the database. The classes may be automatically generated and they frequently implement INotifyPropertyChanged. The UI is commonly a collection of lists or grids with edit forms allowing the user to manipulate the underlying data. The Microsoft toolset has always been very good at building these kinds of applications quickly and easily. If your model or application falls into this category, a ViewModel would probably impose unacceptably high overhead without sufficiently benefitting your application design.

That said, even in these cases the ViewModel can still provide value. For example, the ViewModel is an excellent place to implement undo functionality. Alternatively, you can choose to use MVVM for a portion of the application (such as document management, as I’ll discuss later) and pragmatically expose your Model directly to the View.

Why Use a ViewModel?
If a ViewModel seems appropriate for your application, there are still questions to be answered before you start coding. One of the first is how to reduce the number of proxy properties.

The separation of the View from the Model promoted by the MVVM design pattern is an important and valuable aspect of the pattern. As a result, if a Model class has 10 properties that need to be exposed in the View, the ViewModel typically ends up having 10 identical properties that simply proxy the call to the underlying model instance. These proxy properties usually raise a property-changed event when set to indicate to the View that the property has been changed.

Not every Model property needs to have a ViewModel proxy property, but every Model property that needs to be exposed in the View will typically have a proxy property. The proxy properties usually look like this:

  public string Description {
    get {
      return this.UnderlyingModelInstance.Description;
    }
    set {
      this.UnderlyingModelInstance.Description = value;
      this.RaisePropertyChangedEvent("Description");
    }
  }

Any non-trivial application will have tens or hundreds of Model classes that need to be exposed to the user through the ViewModel in this fashion. This is simply intrinsic to the separation provided by MVVM.

Writing these proxy properties is boring and therefore error-prone, especially because raising the property-changed event requires a string that must match the name of the property (and will not be included in any automatic code refactoring). To eliminate these proxy events, the common solution is to expose the model instance from the ViewModel wrapper directly, then have the domain model implement the INotifyPropertyChanged interface:

  public class SomeViewModel {
    public SomeViewModel( DomainObject domainObject ) {
      Contract.Requires(domainObject!=null,
        "The domain object to wrap must not be null");
      this.WrappedDomainObject = domainObject;
    }

    public DomainObject WrappedDomainObject {
      get; private set;
    }
    ...

Thus, the ViewModel can still expose the commands and additional properties required by the view without duplicating Model properties or creating lots of proxy properties. This approach certainly has its appeal, especially if the Model classes already implement the INotifyPropertyChanged interface. Having the model implement this interface isn’t necessarily a bad thing and it was even common with Microsoft .NET Framework 2.0 and Windows Forms applications. It does clutter up the domain model, though, and wouldn’t be useful for ASP.NET applications or domain services.

With this approach the View has a dependency on the Model, but it’s only an indirect dependency through data binding, which does not require a project reference from the View project to the Model project. So for purely pragmatic reasons this approach is sometimes useful.
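As a concrete picture of that pragmatic approach, here is a minimal Model class that raises its own change notifications by implementing INotifyPropertyChanged. This is a sketch: the Person class and its Name property are illustrative stand-ins, not code from the article's sample application:

```csharp
using System;
using System.ComponentModel;

// A Model class that raises its own change notifications, so a View
// can data bind to it directly without a proxy property in the ViewModel.
public class Person : INotifyPropertyChanged
{
    private string name;

    public event PropertyChangedEventHandler PropertyChanged;

    public string Name
    {
        get { return this.name; }
        set
        {
            if (this.name == value) return; // avoid redundant notifications
            this.name = value;
            this.RaisePropertyChanged("Name");
        }
    }

    protected void RaisePropertyChanged(string propertyName)
    {
        var handler = this.PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```

WPF data binding subscribes to PropertyChanged in exactly this way, which is why the string passed to the event must match the property name.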
However, this approach does violate the spirit of the MVVM design pattern, and it reduces your ability to introduce new ViewModel-specific functionality later (such as undo capabilities). I’ve encountered scenarios with this approach that caused a fair bit of rework. Imagine the not-uncommon situation where there’s a data binding on a deeply nested property. If the Person ViewModel is the current data context, and the Person has an Address, the data binding might look something like this:

  {Binding WrappedDomainObject.Address.Country}

If you ever need to introduce additional ViewModel functionality on the Address object, you’ll need to remove data binding references to WrappedDomainObject.Address and instead use new ViewModel properties. This is problematic because updates to the XAML data binding (and possibly the data contexts as well) are hard to test. The View is the one component that doesn’t have automated and comprehensive regression tests.

Dynamic Properties
My solution to the proliferation of proxy properties is to use the new .NET Framework 4 and WPF support for dynamic objects and dynamic method dispatch. The latter allows you to determine at run time how to handle reading or writing to a property that does not actually exist on the class. This means you can eliminate all the handwritten proxy properties in the ViewModel while still encapsulating the underlying model. Note, however, that Silverlight 4 does not support binding to dynamic properties.

The simplest way to implement this capability is to have the ViewModel base class extend the new System.Dynamic.DynamicObject class and override the TryGetMember and TrySetMember methods. The Dynamic Language Runtime (DLR) calls these two methods when the property being referenced does not exist on the class, allowing the class to determine at run time how to implement the missing properties. Combined with a small amount of reflection, the ViewModel class can dynamically proxy the property access to the underlying model instance in only a few lines of code:

  public override bool TryGetMember(
    GetMemberBinder binder, out object result) {

    string propertyName = binder.Name;
    PropertyInfo property =
      this.WrappedDomainObject.GetType().GetProperty(propertyName);

    if( property==null || property.CanRead==false ) {
      result = null;
      return false;
    }

    result = property.GetValue(this.WrappedDomainObject, null);
    return true;
  }

The method starts by using reflection to find the property on the underlying Model instance. (For more details, see the June 2007 “CLR Inside Out” column “Reflections on Reflection” at msdn.microsoft.com/magazine/cc163408.) If the model doesn’t have such a property, then the method fails by returning false and the data binding fails. If the property exists, the method uses the property information to retrieve and return the Model’s property value. This is more work than the traditional proxy property’s get method, but this is the only implementation you need to write for all models and all properties.

The real power of the dynamic proxy property approach is in the property setters. In TrySetMember, you can include common logic such as raising property-changed events. The code looks something like this:

  public override bool TrySetMember(
    SetMemberBinder binder, object value) {
    ...
  }

Again, the method starts by using reflection to grab the property from the underlying Model instance. If the property doesn’t exist or the property is read-only, the method fails by returning false. If the property exists on the domain object, the property information is used to set the Model property. Then you can include any logic common to all property setters. In this sample code I simply raise the property-changed event for the property I just set, but you can easily do more.

One of the challenges of encapsulating a Model is that the Model frequently has what Unified Modeling Language calls derived properties. For example, a Person class probably has a BirthDate property and a derived Age property. The Age property is read-only and automatically calculates the age based on the birth date and the current date:

  public class Person : DomainObject {
    public DateTime BirthDate {
      get; set;
    }

    public int Age {
      get {
        var today = DateTime.Now;
        // Simplified demo code!
        int age = today.Year - this.BirthDate.Year;
        return age;
      }
    }
    ...

When the BirthDate property changes, the Age property also implicitly changes because the age is derived mathematically from the birth date. So when the BirthDate property is set, the ViewModel class needs to raise a property-changed event for both the BirthDate property and the Age property. With the dynamic ViewModel approach, you can do this automatically by making this inter-property relationship explicit within the model.
First, you need a custom attribute to capture the property relationship:

  [AttributeUsage(AttributeTargets.Property, AllowMultiple=true)]
  public sealed class AffectsOtherPropertyAttribute : Attribute {
    public AffectsOtherPropertyAttribute(
      string otherPropertyName) {
      this.AffectsProperty = otherPropertyName;
    }

    public string AffectsProperty {
      get;
      private set;
    }
  }

I set AllowMultiple to true to support scenarios where a property can affect multiple other properties. Applying this attribute to codify the relationship between BirthDate and Age directly in the model is straightforward:

  [AffectsOtherProperty("Age")]
  public DateTime BirthDate { get; set; }

To use this new model metadata within the dynamic ViewModel class, I can now update the TrySetMember method with three additional lines of code, so it looks like this:

  public override bool TrySetMember(
    SetMemberBinder binder, object value) {
    ...
    var affectsProps = property.GetCustomAttributes(
      typeof(AffectsOtherPropertyAttribute), true);
    foreach(AffectsOtherPropertyAttribute otherPropertyAttr
        in affectsProps)
      this.RaisePropertyChanged(
        otherPropertyAttr.AffectsProperty);
  }

With the reflected property information already in hand, the GetCustomAttributes method can return any AffectsOtherProperty attributes on the model property. Then the code simply loops over the attributes, raising property-changed events for each one. So changes to the BirthDate property through the ViewModel now automatically raise both BirthDate and Age property-changed events.

It’s important to realize that if you explicitly program a property on the dynamic ViewModel class (or, more likely, on model-specific derived ViewModel classes), the DLR will not call the TryGetMember and TrySetMember methods and will instead call the properties directly. In that case, you lose this automatic behavior. However, the code could easily be refactored so that custom properties could use this functionality as well.

Let’s return to the problem of the data binding on a deeply nested property (where the ViewModel is the current WPF data context) that looks like this:

  {Binding WrappedDomainObject.Address.Country}

Using dynamic proxy properties means the underlying wrapped domain object is no longer exposed, so the data binding would actually look like this:

  {Binding Address.Country}

In this case, the Address property would still access the underlying model Address instance directly. However, now when you want to introduce a ViewModel around the Address, you simply add a new property on the Person ViewModel class. The new Address property is very simple:

  public DynamicViewModel Address {
    get {
      if( addressViewModel==null )
        addressViewModel =
          new DynamicViewModel(this.Person.Address);
      return addressViewModel;
    }
  }
  private DynamicViewModel addressViewModel;

No XAML data bindings need to be changed because the property is still called Address, but now the DLR calls the new concrete property rather than the dynamic TryGetMember method. (Notice that the lazy instantiation within this Address property is not thread-safe. However, only the View should be accessing the ViewModel and the WPF/Silverlight view is single-threaded, so this is not a concern.)

This approach can be used even when the model implements INotifyPropertyChanged. The ViewModel can notice this and choose not to proxy property-changed events. In this case, it listens for them from the underlying model instance and then re-raises the events as its own. In the constructor of the dynamic ViewModel class, I perform the check and remember the result:

  public DynamicViewModel(DomainObject model) {
    Contract.Requires(model != null,
      "Cannot encapsulate a null model");
    this.ModelInstance = model;

    // Raises its own property changed events
    if( model is INotifyPropertyChanged ) {
      this.ModelRaisesPropertyChangedEvents = true;
      var raisesPropChangedEvents =
        model as INotifyPropertyChanged;
      raisesPropChangedEvents.PropertyChanged +=
        (sender,args) =>
          this.RaisePropertyChanged(args.PropertyName);
    }
  }

To prevent duplicate property-changed events, I also need to make a slight modification to the TrySetMember method:

  if( this.ModelRaisesPropertyChangedEvents==false )
    this.RaisePropertyChanged(property.Name);

So you can use a dynamic proxy property to dramatically simplify the ViewModel layer by eliminating standard proxy properties.

Figure 1 The Components of an MVVM Application (the View (XAML), ViewModel, Domain Model and Data Access Layer projects, the WpfUtilities and Utilities library projects, the main executable, and a unit test project for each layer)
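The TryGetMember and TrySetMember mechanics above can be exercised outside of WPF with a stripped-down sketch. Assumptions: the Person model, the Action&lt;string&gt; event and the overall class shape here are simplified stand-ins for the article's DomainObject and event plumbing, not its actual sample code:

```csharp
using System;
using System.Dynamic;
using System.Reflection;

public class Person
{
    public string Name { get; set; }
}

// Minimal dynamic ViewModel: proxies property reads and writes to the
// wrapped model via reflection and reports each property it sets.
public class DynamicViewModel : DynamicObject
{
    public object ModelInstance { get; private set; }
    public event Action<string> PropertyChanged = delegate { };

    public DynamicViewModel(object model)
    {
        if (model == null) throw new ArgumentNullException("model");
        this.ModelInstance = model;
    }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        PropertyInfo property = this.ModelInstance.GetType().GetProperty(binder.Name);
        if (property == null || !property.CanRead) { result = null; return false; }
        result = property.GetValue(this.ModelInstance, null);
        return true;
    }

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        PropertyInfo property = this.ModelInstance.GetType().GetProperty(binder.Name);
        if (property == null || !property.CanWrite) return false;
        property.SetValue(this.ModelInstance, value, null);
        this.PropertyChanged(binder.Name); // common setter logic lives here
        return true;
    }
}

class Demo
{
    static void Main()
    {
        var model = new Person();
        dynamic vm = new DynamicViewModel(model);
        vm.Name = "Ada";               // routed through TrySetMember
        Console.WriteLine(vm.Name);    // routed through TryGetMember
        Console.WriteLine(model.Name); // the underlying model was updated
    }
}
```

In the real pattern the same dispatch happens through WPF data binding rather than the dynamic keyword, but the control flow through the two overrides is identical.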
This significantly reduces coding, testing, documentation and long-term maintenance. Adding new properties to the model no longer requires updating the ViewModel layer unless there is very special View logic for the new property. Additionally, this approach can solve difficult issues like related properties. The common TrySetMember method could also help you implement an undo capability because user-driven property changes all flow through the TrySetMember method.

Figure 2 Document Manager View Adapter (a third-party tabbed workspace and docking controls in the View layer, connected through a DocumentManagerAdapter to a singleton DocumentManager in the ViewModel layer that holds 0..* Document objects, with subclasses such as ViewAllClientsDocument, ClientDocument and MortgageDocument)

Pros and Cons
Many developers are leery of reflection (and the DLR) because of performance concerns. In my own work I haven’t found this to be a problem. The performance penalty for the user when setting a single property in the UI is not likely to be noticed. That may not be the case in highly interactive UIs, such as multi-touch design surfaces.

The only major performance issue is in the initial population of the view when there are a large number of fields. Usability concerns should naturally limit the number of fields you’re exposing on any screen so that the performance of the initial data bindings through this DLR approach is undetectable.

Nevertheless, performance should always be carefully monitored and understood as it relates to the user experience. The simple approach previously described could be rewritten with reflection caching. For additional details, see Joel Pobar’s article in the July 2005 issue of MSDN Magazine (msdn.microsoft.com/magazine/cc163759).

There is some validity to the argument that code readability and maintainability are negatively affected using this approach because the View layer seems to be referencing properties on the ViewModel that don’t actually exist. However, I believe the benefits of eliminating most of the hand-coded proxy properties far outweigh the problems, especially with proper documentation on the ViewModel.

The dynamic proxy property approach does reduce or eliminate the ability to obfuscate the Model layer because the properties on the Model are now referenced by name in the XAML. Using traditional proxy properties does not limit your ability to obfuscate the Model because the properties are referenced directly and would be obfuscated with the rest of the application. However, as most obfuscation tools do not yet work with XAML/BAML, this is largely irrelevant. A code cracker can start from the XAML/BAML and work into the Model layer in either case.

Finally, this approach could be abused by attributing model properties with security-related metadata and expecting the ViewModel to be responsible for enforcing security. Security doesn’t seem like a View-specific responsibility, and I believe this is placing too many responsibilities on the ViewModel. In this case, an aspect-oriented approach applied within the Model would be more suitable.

Collections
Collections are one of the most difficult and least satisfactory aspects of the MVVM design pattern. If a collection in the underlying Model is changed by the Model, it’s the responsibility of the ViewModel to somehow expose the change so that the View can update itself appropriately.

Unfortunately, in all likelihood the Model does not expose collections that implement the INotifyCollectionChanged interface. In the .NET Framework 3.5, this interface is in the System.Windows.dll, which strongly discourages its use in the Model. Fortunately, in the .NET Framework 4, this interface has migrated to System.dll, making it much more natural to use observable collections from within the Model.

Observable collections in the Model open up new possibilities for Model development and could be used in Windows Forms and Silverlight applications. This is currently my preferred approach because it’s much simpler than anything else, and I’m happy the INotifyCollectionChanged interface is moving to a more common assembly.

Without observable collections in the Model, the best that can be done is to expose some other mechanism—most likely custom events—on the Model to indicate when the collection has changed. This should be done in a Model-specific way. For example, if the Person class had a collection of addresses it could expose events such as:

  public event EventHandler<AddressesChangedEventArgs>
    NewAddressAdded;
  public event EventHandler<AddressesChangedEventArgs>
    AddressRemoved;

This is preferable to raising a custom collection event designed specifically for the WPF ViewModel. However, it’s still difficult to expose collection changes in the ViewModel. Likely, the only recourse is to raise a property-changed event on the entire ViewModel collection property. This is an unsatisfactory solution at best.

Another problem with collections is determining when or if to wrap each Model instance in the collection within a ViewModel instance. For smaller collections, the ViewModel may expose a new observable collection and copy everything in the underlying Model collection into the ViewModel observable collection, wrapping each Model item in the collection in a corresponding ViewModel instance as it goes. The ViewModel might need to
listen for collection-changed events to transmit user changes back to the underlying Model.
However, for very large collections that will be exposed in some form of virtualizing panel, the easiest and most pragmatic approach is just to expose the Model objects directly.

Instantiating the ViewModel
Another problem with the MVVM design pattern that's seldom discussed is where and when the ViewModel instances should be instantiated. This problem is also frequently overlooked in discussions of similar design patterns such as MVC.
My preference is to write a ViewModel singleton that provides the main ViewModel objects from which the View can easily retrieve all other ViewModel objects as required. Often this master ViewModel object provides the command implementations so the View can support opening of documents.
However, most of the applications I've worked with provide a document-centric interface, usually using a tabbed workspace similar to Visual Studio. So in the ViewModel layer I want to think in terms of documents, and the documents expose one or more ViewModel objects wrapping particular Model objects. Standard WPF commands in the ViewModel layer can then use the persistence layer to retrieve the necessary objects, wrap them in ViewModel instances and create ViewModel document managers to display them.
In the sample application included with this article, the ViewModel command for creating a new Person is:

internal class OpenNewPersonCommand : ICommand {
  ...
  // Open a new person in a new window.
  public void Execute(object parameter) {
    var person = new MvvmDemo.Model.Person();
    var document = new PersonDocument(person);
    DocumentManager.Instance.ActiveDocument = document;
  }
}

The ViewModel document manager referenced in the last line is a singleton that manages all open ViewModel documents. The question is, how does the collection of ViewModel documents get exposed in the View?
The built-in WPF tab control does not provide the kind of powerful multiple-document interface users have come to expect. Fortunately, third-party docking and tabbed-workspace products are available. Most of them strive to emulate the tabbed document look of Visual Studio, including the dockable tool windows, split views, Ctrl+Tab pop-up windows (with mini-document views) and more. Unfortunately, most of these components don't provide built-in support for the MVVM design pattern. But that's OK, because you can easily apply the Adapter design pattern to link the ViewModel document manager to the third-party view component.

Figure 3 The ViewModel Layer's Document Manager and Document Classes

Document Manager Adapter
The adapter design shown in Figure 2 ensures that the ViewModel doesn't require any reference to the View, so it respects the main goals of the MVVM design pattern. (However, in this case, the concept of a document is defined in the ViewModel layer rather than the Model layer because it's purely a UI concept.)

"The real power of the dynamic proxy property approach is in the property setters."

The ViewModel document manager is responsible for maintaining the collection of open ViewModel documents and knowing which document is currently active. This design allows the ViewModel layer to open and close documents using the document manager, and to change the active document without any knowledge of the View. The ViewModel side of this approach is reasonably straightforward. The ViewModel classes in the sample application are shown in Figure 3. The Document base class exposes several internal lifecycle methods (Activated, LostActivation and DocumentClosed) that are called by the document manager to keep the document up-to-date about what's going on. The document also implements the INotifyPropertyChanged interface so that it can support data binding. For example, the adapter data binds the view document's Title property to the ViewModel's DocumentTitle property.
The most complex piece of this approach is the adapter class, and I've provided a working copy in the project accompanying this article. The adapter subscribes to events on the document manager and uses those events to keep the tabbed-workspace control
msdnmagazine.com July 2010 79
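The document-manager idea above can be illustrated outside of any UI framework. The following is a minimal Python sketch; all names are invented for illustration, and the real sample uses WPF types (ICommand, DocumentManager.Instance, PersonDocument):

```python
# Framework-free sketch of the ViewModel document-manager singleton.
# All names here are invented stand-ins for the WPF sample's types.

class DocumentManager:
    """Singleton that tracks open ViewModel documents and the active one."""
    _instance = None

    def __init__(self):
        self.open_documents = []
        self.listeners = []          # callbacks taking (event, document)
        self._active = None

    @classmethod
    def instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    @property
    def active_document(self):
        return self._active

    @active_document.setter
    def active_document(self, doc):
        # Assigning the active document implicitly opens it, mirroring
        # DocumentManager.Instance.ActiveDocument = document.
        if doc not in self.open_documents:
            self.open_documents.append(doc)
            self._notify("opened", doc)
        self._active = doc
        self._notify("activated", doc)

    def _notify(self, event, doc):
        for listener in self.listeners:
            listener(event, doc)

# Usage, loosely mirroring OpenNewPersonCommand.Execute:
events = []
mgr = DocumentManager.instance()
mgr.listeners.append(lambda e, d: events.append((e, d)))
mgr.active_document = "person-document"
```

An adapter would subscribe to the "opened" and "activated" notifications the way the article's adapter subscribes to document-manager events.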
Figure 4 Linking the View Control and ViewModel Document

private static readonly DependencyProperty
  ViewModelDocumentProperty =
  DependencyProperty.RegisterAttached(
    "ViewModelDocument", typeof(Document),
    typeof(DocumentManagerAdapter), null);

private static Document GetViewModelDocument(
  AvalonDock.ManagedContent viewDoc) {

  return viewDoc.GetValue(ViewModelDocumentProperty)
    as Document;
}

private static void SetViewModelDocument(
  AvalonDock.ManagedContent viewDoc, Document document) {

Figure 5 Setting the Attached Property

private AvalonDock.DocumentContent CreateNewViewDocument(
  Document viewModelDocument) {

  var viewDoc = new AvalonDock.DocumentContent();
  viewDoc.DataContext = viewModelDocument;
  viewDoc.Content = viewModelDocument;

  Binding titleBinding = new Binding("DocumentTitle") {
    Source = viewModelDocument };

  viewDoc.SetBinding(AvalonDock.ManagedContent.TitleProperty,
    titleBinding);

  viewDoc.Closing += OnUserClosingDocument;
  DocumentManagerAdapter.SetViewModelDocument(viewDoc,
    viewModelDocument);
up-to-date. For example, when the document manager indicates that a new document has been opened, the adapter receives an event, wraps the ViewModel document in whatever WPF control is required and then exposes that control in the tabbed workspace.
The adapter has one other responsibility: keeping the ViewModel document manager synchronized with the user's actions. The adapter must therefore also listen for events from the tabbed workspace control so that when the user changes the active document or closes a document the adapter can notify the document manager.
While none of this logic is very complex, there are some caveats. There are several scenarios where the code becomes re-entrant, and this must be handled gracefully. For example, if the ViewModel uses the document manager to close a document, the adapter will

I've used this ViewModel document-manager approach with both WPF and Silverlight successfully. The only View layer code is the adapter, and this can be tested easily and then left alone. This approach keeps the ViewModel completely independent of the View, and I have on one occasion switched vendors for my tabbed workspace component with only minimal changes in the adapter class and absolutely no changes to the ViewModel or Model.
The ability to work with documents in the ViewModel layer feels elegant, and implementing ViewModel commands like the one I demonstrated here is easy. The ViewModel document classes also become obvious places to expose ICommand instances related to the document.
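The re-entrancy caveat can be made concrete with a small sketch (Python stand-ins, all names invented): when the ViewModel closes a document, the adapter removes the corresponding tab, which makes the control fire its own "tab closed" callback back into the adapter; a simple guard flag keeps that echo from re-entering the document manager.

```python
# Sketch of the adapter's re-entrancy guard (all names invented).

class TabControl:
    """Stand-in for a third-party tabbed-workspace control that raises
    its closed callback for user AND programmatic tab removals."""
    def __init__(self):
        self.tabs = []
        self.on_tab_closed = lambda tab: None

    def remove_tab(self, tab):
        self.tabs.remove(tab)
        self.on_tab_closed(tab)   # fires even when the adapter removed it

class DocumentManagerStub:
    def __init__(self):
        self.docs = []
        self.adapter = None

    def close(self, doc):
        self.docs.remove(doc)
        self.adapter.document_closed(doc)

class Adapter:
    def __init__(self, manager, control):
        self.manager = manager
        self.control = control
        self._syncing = False
        control.on_tab_closed = self.user_closed_tab

    def document_closed(self, doc):
        """ViewModel closed a document: remove its tab, guarding against
        the control echoing the removal back into user_closed_tab."""
        self._syncing = True
        try:
            if doc in self.control.tabs:
                self.control.remove_tab(doc)
        finally:
            self._syncing = False

    def user_closed_tab(self, tab):
        if self._syncing:
            return                 # ignore the echo of our own change
        self.manager.close(tab)    # the user really closed it

manager = DocumentManagerStub()
control = TabControl()
manager.adapter = Adapter(manager, control)

manager.docs.append("doc1")
control.tabs.append("doc1")
manager.close("doc1")              # ViewModel-initiated close; must not recurse
```

Without the `_syncing` flag, the ViewModel-initiated close would bounce back through the control's callback into `manager.close` a second time.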
SECURITY BRIEFS BRYAN SULLIVAN
Effectively managing user state in Web applications can be a tricky balancing act of performance, scalability, maintainability and security. The security consideration is especially evident when you're managing user state stored on the client. I have a colleague who used to say that handing state data to a client is like handing an ice cream cone to a 5-year-old: you may get it back, but you definitely can't expect to get it back in the same shape it was when you gave it out!
In this month's column, we'll examine some security implications around client-side state management in ASP.NET applications; specifically, we're going to look at view state security. (Please note: this article assumes that you're familiar with the concept of ASP.NET view state. If not, check out "Understanding ASP.NET View State" by Scott Mitchell at msdn.microsoft.com/library/ms972976.)
If you don't think there's any data stored in your applications' view state worth protecting, think again. Sensitive information can find its way into view state without you even realizing it. And even if you're vigilant about preventing sensitive information loss through view state, an attacker can still tamper with that view state and cause even bigger problems for you and your users. Luckily, ASP.NET has some built-in defenses against these attacks. Let's take a look at how these defenses can be used correctly.

Threat No. 1: Information Disclosure
At Microsoft, development teams use the STRIDE model to classify threats. STRIDE is a mnemonic that stands for:
• Spoofing
• Tampering
• Repudiation
• Information Disclosure
• Denial of Service
• Elevation of Privilege
The main two STRIDE categories of concern from the view state security perspective are Information Disclosure and Tampering (although a successful tampering attack can lead to a possible Elevation of Privilege; we'll discuss that in more detail later). Information disclosure is the simpler of these threats to explain, so we'll discuss that first.
One of the most unfortunately persistent misconceptions around view state is that it is encrypted or somehow unreadable by the user. After all, a view state string certainly doesn't look decomposable:

<input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE" value="/wEPDwULLTE2MTY2ODcyMjkPFgIeCHBhc3N3b3JkBQlzd29yZGZpc2hkZA==" />

However, this string is merely base64-encoded, not encrypted with any kind of cryptographically strong algorithm. We can easily decode and deserialize this string using the limited object serialization (LOS) formatter class System.Web.UI.LosFormatter:

LosFormatter formatter = new LosFormatter();
object viewstateObj = formatter.Deserialize("/wEPDwULLTE2MTY2ODcyMjkPFgIeCHBhc3N3b3JkBQlzd29yZGZpc2hkZA==");

A quick peek in the debugger (see Figure 1) reveals that the deserialized view state object is actually a series of System.Web.UI.Pair objects ending with a System.Web.UI.IndexedString object with a value of "password" and a corresponding string value of "swordfish."
If you don't want to go to the trouble of writing your own code to deserialize view state objects, there are several good view state decoders available for free download on the Internet, including Fritz Onion's ViewState Decoder tool available at alt.pluralsight.com/tools.aspx.

Encrypting View State
In "The Security Development Lifecycle: SDL: A Process for Developing Demonstrably More Secure Software" (Microsoft Press, 2006), Michael Howard and Steve Lipner discuss technologies that can be used to mitigate STRIDE threats. Figure 2 shows threat types and their associated mitigation techniques.
Because we're dealing with an information disclosure threat to our data stored in the view state, we need to apply a confidentiality mitigation technique; the most effective confidentiality mitigation technology in this case is encryption.
ASP.NET version 2.0 has a built-in feature to enable encryption of view state—the ViewStateEncryptionMode property, which can be enabled either through a page directive or in the application's web.config file:

<%@ Page ViewStateEncryptionMode="Always" %>

Or:

<configuration>
  <system.web>
    <pages viewStateEncryptionMode="Always">
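The claim that a __VIEWSTATE value is merely base-64 encoded is easy to verify in any language. Here's a quick sketch in Python (standard library only) using the sample view state string from earlier; no key and no LosFormatter are needed to see the secrets:

```python
import base64

# The __VIEWSTATE value from the article's hidden form field.
view_state = ("/wEPDwULLTE2MTY2ODcyMjkPFgIeCHBhc3N3b3JkBQlz"
              "d29yZGZpc2hkZA==")

raw = base64.b64decode(view_state)

# The "secret" field name and value sit in the decoded bytes in the clear.
assert b"password" in raw
assert b"swordfish" in raw
```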
Figure 1 Secret View State Data Revealed by the Debugger
There are three possible values for ViewStateEncryptionMode: Always (the view state is always encrypted); Never (the view state is never encrypted); and Auto (the view state is only encrypted if one of the page's controls explicitly requests it). The Always and Never values are pretty self-explanatory, but Auto requires a little more explanation.
If a server control persists sensitive information into its page's view state, the control can request that the page encrypt the view state by calling the Page.RegisterRequiresViewStateEncryption method (note that in this case the entire view state is encrypted, not just the view state corresponding to the control that requested it):

public class MyServerControl : WebControl
{
  protected override void OnInit(EventArgs e)
  {
    Page.RegisterRequiresViewStateEncryption();
    base.OnInit(e);
  }
  ...
}

However, there is a caveat. The reason the method is named RegisterRequiresViewStateEncryption, and not something like EnableViewStateEncryption, is because the page can choose to ignore the request. If the page's ViewStateEncryptionMode is set to Auto (or Always), the control's request will be granted and the view state will be encrypted. If ViewStateEncryptionMode is set to Never, the control's request will be ignored and the view state will be unprotected.
This is definitely something to be aware of if you're a control developer. You should consider keeping potentially sensitive information out of the view state (which is always a good idea). In extreme cases where this isn't possible, you might consider overriding the control's SaveViewState and LoadViewState methods to manually encrypt and decrypt the view state there.

Server Farm Considerations
In a single-server environment, it's sufficient just to enable ViewStateEncryptionMode, but in a server farm environment there's some additional work to do. Symmetric encryption algorithms—like the ones that ASP.NET uses to encrypt the view state—require a key. You can either explicitly specify a key in the web.config file, or ASP.NET can automatically generate a key for you. Again, in a single-server environment it's fine to let the framework handle key generation, but this won't work for a server farm. Each server will generate its own unique key, and requests that get load balanced between different servers will fail because the decryption keys won't match.
You can explicitly set both the cryptographic algorithm and the key to use in the machineKey element of your application's web.config file:

<configuration>
  <system.web>
    <machineKey decryption="AES" decryptionKey="143a…">

For the encryption algorithm, you can choose AES (the default value), DES or 3DES. Of these, DES is explicitly banned by the Microsoft SDL Cryptographic Standards, and 3DES is strongly discouraged. I recommend that you stick with AES for maximum security.
Once you've selected an algorithm, you need to create a key. However, remember that the strength of this system's security depends on the strength of that key. Don't use your pet's name, your significant other's birthday or any other easily guessable value! You need to use a cryptographically strong random number. Here's a code snippet to create one in the format that the machineKey element expects (hexadecimal characters only) using the .NET RNGCryptoServiceProvider class:

RNGCryptoServiceProvider csp = new RNGCryptoServiceProvider();
byte[] data = new byte[24];
csp.GetBytes(data);
string value = String.Join("", BitConverter.ToString(data).Split('-'));

At a minimum, you should generate 16-byte random values for your keys; this is the minimum value allowed by the SDL Cryptographic Standards. The maximum length supported for AES keys is 24 bytes (48 hex chars) in the Microsoft .NET Framework 3.5 and earlier, and 32 bytes (64 hex chars) in the .NET Framework 4. DES supports a maximum key length of only 8 bytes and 3DES a maximum of 24 bytes, regardless of the framework version. Again, I recommend that you avoid these algorithms and use AES instead.

Threat No. 2: Tampering
Tampering is the other significant threat. You might think the same encryption defense that keeps attackers from prying into the view state would also prevent them from changing it, but this is wrong. Encryption doesn't provide defense against tampering: Even with encrypted data, it's still possible for an attacker to flip bits in the encrypted data.

Figure 2 Techniques to Mitigate STRIDE Threats
Threat Type              Mitigation Technique
Spoofing                 Authentication
Tampering                Integrity
Repudiation              Non-repudiation services
Information Disclosure   Confidentiality
Denial of Service        Availability
Elevation of Privilege   Authorization
When view state MAC is enabled, the __VIEWSTATE value is generated as follows:
1. The page's state is gathered into a state graph object.
2. The state graph is serialized into a binary format.
   a. A secret key value is added to the serialized byte array.
   b. A hash is computed for the serialized byte array.
   c. The hash is appended to the end of the serialized byte array.
3. The serialized byte array is encoded into a base-64 string.
4. The base-64 string is written to the __VIEWSTATE form value in the page.
Whenever this page is posted back to the server, the page code validates the incoming __VIEWSTATE by taking the incoming state graph data (deserialized from the __VIEWSTATE value), adding the same secret key value, and recomputing the hash value. If the new recomputed hash value matches the hash value supplied at the end of the incoming __VIEWSTATE, the view state is considered valid and processing proceeds (see Figure 3). Otherwise, the view state is considered to have been tampered with and an exception is thrown.
The security of this system lies in the secrecy of the secret key value. This value is always stored on the server, either in memory

Any page that has its view state MAC-disabled is potentially vulnerable to a cross-site scripting attack against the __VIEWSTATE parameter. The first proof-of-concept of this attack was developed by David Byrne of Trustwave, and demonstrated by Byrne and his colleague Rohini Sulatycki at the Black Hat DC conference in February 2010. To execute this attack, the attacker crafts a view state graph where the malicious script code he wants to execute is set as the persisted value of the innerHtml property of the page's form element. In XML form, this view state graph would look something like Figure 4.
The attacker then base-64 encodes the malicious view state and appends this string as the value of a __VIEWSTATE query string parameter for the vulnerable page. For example, if the page
home.aspx on the site www.contoso.com was known to have view state MAC disabled, the attack URI would be http://www.contoso.com/home.aspx?__VIEWSTATE=/w143a...
All that remains is to trick a potential victim into following this link. Then the page code will deserialize the view state from the incoming __VIEWSTATE query string parameter and write the malicious script as the innerHtml of the form. When the victim gets the page, the attacker's script will immediately execute in the victim's browser, with the victim's credentials.

Figure 4 XML Code for View State MAC Attack

<viewstate>
  <Pair>
    <Pair>
      <String>…</String>
      <Pair>
        <ArrayList>
          <Int32>0</Int32>
          <Pair>
            <ArrayList>
              <Int32>1</Int32>
              <Pair>
                <ArrayList>
                  <IndexedString>innerhtml</IndexedString>
                  <String>…malicious script goes here…</String>
                </ArrayList>
              </Pair>
            </ArrayList>
          </Pair>
        </ArrayList>
      </Pair>
    </Pair>
  </Pair>
</viewstate>

This attack is especially dangerous because it completely bypasses all of the usual cross-site scripting (XSS) defenses. The XSS Filter in Internet Explorer 8 will not block it. The ValidateRequest feature of ASP.NET will block several common XSS attack vectors, but it does not deserialize and analyze incoming view state, so it's also no help in this situation. The Microsoft Anti-Cross Site Scripting (Anti-XSS) Library (now included as part of the Microsoft Web Protection Library) is even more effective against XSS than ValidateRequest; however, neither the Anti-XSS Library input sanitization features nor its output encoding features will protect against this attack either. The only real defense is to ensure that view state MAC is consistently applied to all pages.

More Server Farm Considerations
Similar to ViewStateEncryptionMode, there are special considerations with EnableViewStateMac when deploying applications in a server farm environment. The secret value used for the view state hash must be constant across all machines in the farm, or the view state validation will fail.
You can specify both the validation key and the HMAC algorithm to use in the same location where you specify the view state encryption key and algorithm—the machineKey element of the web.config file:

<configuration>
  <system.web>
    <machineKey validation="AES" validationKey="143a...">

If your application is built on the .NET Framework 3.5 or earlier, you can choose SHA1 (the default value), AES, MD5 or 3DES as the MAC algorithm. If you're running .NET Framework 4, you can also choose MACs from the SHA-2 family: HMACSHA256, HMACSHA384 or HMACSHA512. Of these choices, MD5 is explicitly banned by the SDL Crypto Standards and 3DES is strongly discouraged. SHA1 is also discouraged, but for .NET Framework 3.5 and earlier applications it's your best option. .NET Framework 4 applications should definitely be configured with either HMACSHA512 or HMACSHA256 as the validation algorithm.
After you choose a MAC algorithm, you'll also need to manually specify the validation key. Remember to use cryptographically strong random numbers: if necessary, you can refer to the key generation code specified earlier. You should use at least 128-byte validation keys for either HMACSHA384 or HMACSHA512, and at least 64-byte keys for any other algorithm.

You Can't Hide Vulnerable View State
Unlike a vulnerable file permission or database command that may be hidden deep in the server-side code, vulnerable view state is easy to find just by looking for it. If an attacker wanted to test a page to see whether its view state was protected, he could simply make a request for that page himself and pull the base-64 encoded view state value from the __VIEWSTATE form value. If the LosFormatter class can successfully deserialize that value, then it has not been encrypted. It's a little trickier—but not much—to determine whether view state MAC has been applied.
The MAC is always applied to the end of the view state value, and since hash sizes are constant for any given hash algorithm, it's fairly easy to determine whether a MAC is present. If HMACSHA512 has been used, the MAC will be 64 bytes; if HMACSHA384 has been used, it will be 48 bytes, and if any other algorithm has been used it will be 32 bytes. If you strip 32, 48 or 64 bytes off of the end of the base-64 decoded view state value, and any of these deserialize with LosFormatter into the same object as before, then view state MAC has been applied. If none of these trimmed view state byte arrays will successfully deserialize, then view state MAC hasn't been applied and the page is vulnerable.
Casaba Security makes a free tool for developers called Watcher that can help automate this testing. Watcher is a plug-in for Eric Lawrence's Fiddler Web debugging proxy tool, and it works by passively analyzing the HTTP traffic that flows through the proxy. It will flag any potentially vulnerable resources that pass through—for example, an .aspx page with a __VIEWSTATE missing a MAC. If you're not already using both Fiddler and Watcher as part of your testing process, I highly recommend giving them a try.

Wrapping Up
View state security is nothing to take lightly, especially considering the new view state tampering attacks that have recently been demonstrated. I encourage you to take advantage of the ViewStateEncryptionMode and EnableViewStateMac security mechanisms built into ASP.NET.

BRYAN SULLIVAN is a security program manager for the Microsoft Security Development Lifecycle team, where he specializes in Web application security issues. He's the author of "Ajax Security" (Addison-Wesley, 2007).

THANKS to the following technical expert for reviewing this article: Michael Howard
THE WORKING PROGRAMMER TED NEWARD
Last time, I continued my exploration of MongoDB via the use of exploration tests. I described how to start and stop the server during a test, then showed how to capture cross-document references and discussed some of the reasoning behind the awkwardness of doing so. Now it's time to explore some more intermediate MongoDB capabilities: predicate queries, aggregate functions and the LINQ support provided by the MongoDB.Linq assembly. I'll also provide some notes about hosting MongoDB in a production environment.

Code download available at code.msdn.microsoft.com/mag201007WorkProg.

When We Last Left Our Hero . . .
For reasons of space, I won't review much of the previous articles; instead, you can read them online in the May and June issues at msdn.microsoft.com/magazine. In the associated code bundle, however, the exploration tests have been fleshed out to include a pre-existing sample set of data to work with, using characters from one of my favorite TV shows. Figure 1 shows a previous exploration test, by way of refresher. So far, so good.

Calling All Old People . . .
In previous articles, the client code has fetched all documents that match particular criteria (such as having a "lastname" field matching a given String or an "_id" field matching a particular Oid), but I haven't discussed how to do predicate-style queries (such as "find all documents where the 'age' field has a value higher than 18"). As it turns out, MongoDB doesn't use a SQL-style interface to describe the query to execute; instead, it uses ECMAScript/JavaScript, and it can in fact accept blocks of code to execute on the server to filter or aggregate data, almost like a stored procedure.
This provides some LINQ-like capabilities, even before looking at the LINQ capabilities supported by the Mongo.Linq assembly. By specifying a document containing a field named "$where" and a code block describing the ECMAScript code to execute, arbitrarily complex queries can be created:

[TestMethod]
public void Where()
{
  ICursor oldFolks =
    db["exploretests"]["familyguy"].Find(
      new Document().Append("$where",
        new Code("this.gender === 'F'")));
  bool found = false;
  foreach (var d in oldFolks.Documents)
    found = true;
  Assert.IsTrue(found, "Found people");
}

As you can see, the Find call returns an ICursor instance, which, although itself isn't IEnumerable (meaning it can't be used in the foreach loop), contains a Documents property that's an IEnumerable<Document>. If the query would return too large a set of data, the ICursor can be limited to return the first n results by setting its Limit property to n.
The predicate query syntax comes in four different flavors, shown in Figure 2. In the second and third forms, "this" always refers to the object being examined.
You can send any arbitrary command (that is, ECMAScript code) through the driver to the database, in fact, using documents to convey the query or command. So, for example, the Count method provided by the IMongoCollection interface is really just a convenience around this more verbose snippet:

[TestMethod]
public void CountGriffins()
{
  var resultDoc = db["exploretests"].SendCommand(
    new Document()
      .Append("count", "familyguy")
      .Append("query",
        new Document().Append("lastname", "Griffin")));
  Assert.AreEqual(6, (double)resultDoc["n"]);
}

This means that any of the aggregate operations described by the MongoDB documentation, such as "distinct" or "group," for example, are accessible via the same mechanism, even though they may not be surfaced as methods on the MongoDB.Driver APIs.
You can send arbitrary commands outside of a query to the database via the "special-name" syntax "$eval," which allows any legitimate ECMAScript block of code to be executed against the server, again essentially as a stored procedure:

[TestMethod]
public void UseDatabaseAsCalculator()
{
  var resultDoc = db["exploretests"].SendCommand(
    new Document()
      .Append("$eval",
        new CodeWScope {
          Value = "function() { return 3 + 3; }",
          Scope = new Document() }));
  TestContext.WriteLine("eval returned {0}", resultDoc.ToString());
  Assert.AreEqual(6, (double)resultDoc["retval"]);
}

Or, use the provided Eval function on the database directly.
If that isn't flexible enough, MongoDB permits the storage of user-defined ECMAScript functions on the database instance
Figure 1 An Example Exploration Test

[TestMethod]
public void StoreAndCountFamilyWithOid()
{
  var oidGen = new OidGenerator();

  var peter = new Document();
  peter["firstname"] = "Peter";
  peter["lastname"] = "Griffin";
  peter["_id"] = oidGen.Generate();

  var lois = new Document();
  lois["firstname"] = "Lois";
  lois["lastname"] = "Griffin";
  lois["_id"] = oidGen.Generate();

  peter["spouse"] = lois["_id"];
  lois["spouse"] = peter["_id"];

  var cast = new[] { peter, lois };
  var fg = db["exploretests"]["familyguy"];
  fg.Insert(cast);

  Assert.AreEqual(peter["spouse"], lois["_id"]);
  Assert.AreEqual(
    fg.FindOne(new Document().Append("_id",
      peter["spouse"])).ToString(), lois.ToString());

  Assert.AreEqual(2,
    fg.Count(new Document().Append("lastname", "Griffin")));
}

for execution during queries and server-side execution blocks by adding ECMAScript functions to the special database collection "system.js," as described on the MongoDB Web site (MongoDB.org).
…cantly better. Documentation of the new features and examples will be in the wiki of the project site (wiki.github.com/samus/mongodb-csharp/).

Shipping Is a Feature
Above all else, if MongoDB is going to be used in a production environment, a few things need to be addressed to make it less painful for the poor chaps who have to keep the production servers and services running.
To begin, the server process (mongod.exe) needs to be installed as a service—running it in an interactive desktop session is typically not allowed on a production server. To that end, mongod.exe supports a service install option, "--install," which installs it as a service that can then be started either by the Services panel or the command line: "net start MongoDB." However, as of this writing, there's one small quirk in the --install command—it infers the path to the executable by looking at the command line used to execute it, so the full path must be given on the command line. This means that if MongoDB is installed in C:\Prg\mongodb, you must install it as a service at a command prompt (with administrative rights) with the command C:\Prg\mongodb\bin\mongod.exe --install.
However, any command-line parameters, such as "--dbpath," must also appear in that installation command, which means if any of the settings—port, path to the data files and so on—change, the service must be reinstalled. Fortunately, MongoDB supports a configuration file option, given by the "--config" command-line option, so typically the best approach is to pass the full config file path to the service install and do all additional configuration from there:

C:\Prg\mongodb\bin\mongod.exe --config C:\Prg\mongodb\bin\mongo.cfg --install
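A config file passed via "--config" might look like the following sketch. The ini-style key = value option names are assumptions based on MongoDB's 1.x-era configuration format and should be checked against the documentation for your version:

```
# mongo.cfg -- sketch only; verify option names for your MongoDB version
dbpath = C:\Prg\mongodb\data
port = 27017
logpath = C:\Prg\mongodb\log\mongod.log
logappend = true
```

With the settings in the file, changing the port or data path means editing the file and restarting the service rather than reinstalling it.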
The first two filled Rectangle objects are used to provide background shading for mouse-over and selection (respectively). The third displays a stroked rectangle to indicate input focus. The visibility of these rectangles is controlled by the VSM markup. Notice how each visual group gets its own element to manipulate. The ContentPresenter hosts the item as it's displayed in the ListBox. Generally, the content of the ContentPresenter is another visual tree defined in a DataTemplate that's set to the ItemTemplate property of ListBox.

The VSM markup consists of elements of type VisualStateManager.VisualStateGroups, VisualStateGroup and VisualState, all with an

That's where the markup goes for the BeforeLoaded, AfterLoaded and BeforeUnloaded states in the LayoutStates group.

The fade-in is the easier of the two jobs. When an item is first added to the visual tree, it's said to be "loaded" into the visual tree. Prior to being loaded, the item has a visual state of BeforeLoaded, and then the visual state becomes AfterLoaded.

There are several ways to define the fade-in. The first requires initializing the Opacity to 0 in the Grid tag:

<Grid Name="rootGrid" Opacity="0" ... >

You then provide an animation for the AfterLoaded state to increase the Opacity property to 1 over the course of 1 second:

<VisualState x:Name="AfterLoaded">
  <Storyboard>
    <DoubleAnimation Storyboard.TargetName="rootGrid"
                     Storyboard.TargetProperty="Opacity"
                     To="1" Duration="0:0:1" />
  </Storyboard>
</VisualState>
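A symmetric fade-out for the BeforeUnloaded state could be sketched the same way. This markup is my illustration rather than the article's, and the text implies the unload case has more to it than this naive mirror image:

```xml
<!-- Hypothetical sketch: fade the item out before it leaves the tree -->
<VisualState x:Name="BeforeUnloaded">
  <Storyboard>
    <DoubleAnimation Storyboard.TargetName="rootGrid"
                     Storyboard.TargetProperty="Opacity"
                     To="0" Duration="0:0:1" />
  </Storyboard>
</VisualState>
```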
    if (ItemContainerStyle != null)
      container.Style = ItemContainerStyle;

    return container;
  }

  protected override bool IsItemItsOwnContainerOverride(object item)
  {
    return item is ListBoxItem;
  }
}

The crucial method is GetContainerForItemOverride. This method returns the object used to wrap each item. ItemsControl returns ContentPresenter, but ListBox returns ListBoxItem, and that's what FluidableItemsControl returns as well. This ListBoxItem must have a style applied, and for that reason FluidableItemsControl also defines the same ItemContainerStyle property as ListBox.

The other method that should be implemented is IsItemItsOwnContainerOverride. If the item in the ItemsControl is already the same type as its container (in this case, a ListBoxItem), then there's no reason to put it in another container. Now you can set a ListBoxItem style definition to the ItemContainerStyle property of FluidableItemsControl. The template within the style definition can be drastically simplified. It doesn't need logic for mouse-over, selection or input focus, so all those visual states can be eliminated, as well as the three Rectangle objects.
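In XAML, assigning that style might look like the following sketch. The "local" namespace prefix, the resource key and the binding are illustrative placeholders, not names from the article:

```xml
<!-- Hypothetical usage; prefix, style key and binding are illustrative -->
<local:FluidableItemsControl
    ItemsSource="{Binding Items}"
    ItemContainerStyle="{StaticResource fluidItemStyle}" />
```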
The FluidItemsControl program shows the result. It's pretty much the same as FluidListBox but with all the ListBox selection logic absent. The default panel for ItemsControl is a StackPanel, so that's another simplification. To compensate for these simplifications, I've enhanced the animations for loading and unloading items. Now there's an animation on the PlaneProjection transform that makes it appear as if the items are swiveling into and out of view.

Figure 2 An Excerpt from the ListBoxItem Template in FluidListBox

<ControlTemplate TargetType="ListBoxItem">
  <Grid Name="rootGrid" Background="{TemplateBinding Background}">
    <VisualStateManager.VisualStateGroups>
      <!-- Additions to standard template -->
      <VisualStateGroup x:Name="LayoutStates">
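One way such a swivel could be wired up, as a minimal sketch rather than the article's actual FluidItemsControl markup (element names and angles here are my own), is to give the root Grid a PlaneProjection and animate its RotationY in the AfterLoaded state from edge-on to face-on:

```xml
<!-- Hypothetical sketch: the Grid gets a named PlaneProjection... -->
<Grid Name="rootGrid" ... >
  <Grid.Projection>
    <PlaneProjection x:Name="projection" RotationY="-90" />
  </Grid.Projection>
  ...
</Grid>

<!-- ...and the AfterLoaded state swivels it into view -->
<VisualState x:Name="AfterLoaded">
  <Storyboard>
    <DoubleAnimation Storyboard.TargetName="projection"
                     Storyboard.TargetProperty="RotationY"
                     To="0" Duration="0:0:1" />
  </Storyboard>
</VisualState>
```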