
THE MICROSOFT JOURNAL FOR DEVELOPERS JULY 2010 VOL 25 NO 7

FEATURES

OFFICE ADD-INS
3 Solutions for Accessing SharePoint Data in Office 2010
Donovan Follette and Paul Stubbs . . . . . . . . . . 20

SHAREPOINT SECURITY
Trim SharePoint Search Results for Better Security
Ashley Elenjickal and Pooja Harjani . . . . . . . . . . 36

ONENOTE 2010
Creating OneNote 2010 Extensions with the OneNote Object Model
Andy Gray . . . . . . . . . . 44

OFFICE SERVICES
Merging Word Documents on the Server Side with SharePoint 2010
Manvir Singh and Ankush Bhatia . . . . . . . . . . 50

SMART CLIENT
Building Distributed Apps with NHibernate and Rhino Service Bus
Oren Eini . . . . . . . . . . 58

C# 4.0
New C# Features in the .NET Framework 4
Chris Burrows . . . . . . . . . . 68

DESIGN PATTERNS
Problems and Solutions with Model-View-ViewModel
Robert McCarter . . . . . . . . . . 74

COLUMNS

EDITOR'S NOTE: Over-Educated, Yet Under-Qualified? Keith Ward, page 4
CUTTING EDGE: Expando Objects in C# 4.0 Dino Esposito, page 6
DATA POINTS: Windows Azure Table Storage–Not Your Father's Database Julie Lerman, page 16
SECURITY BRIEFS: View State Security Bryan Sullivan, page 82
THE WORKING PROGRAMMER: Going NoSQL with MongoDB, Part 3 Ted Neward, page 88
UI FRONTIERS: The Fluid UI in Silverlight 4 Charles Petzold, page 92
DON'T GET ME STARTED: Rejectionists Rejected David Platt, page 96


JULY 2010 VOLUME 25 NUMBER 7
LUCINDA ROWLEY Director
DIEGO DAGUM Editorial Director/[email protected]
KERI GRASSL Site Manager

KEITH WARD Editor in Chief/[email protected]


TERRENCE DORSEY Technical Editor
DAVID RAMEL Features Editor
WENDY GONCHAR Managing Editor
MARTI LONGWORTH Associate Managing Editor

SCOTT SHULTZ Creative Director


JOSHUA GOULD Art Director
ALAN TAO Senior Graphic Designer
CONTRIBUTING EDITORS K. Scott Allen, Dino Esposito, Julie Lerman, Juval
Lowy, Dr. James McCaffrey, Ted Neward, Charles Petzold, David S. Platt

Henry Allain President, Redmond Media Group


Matt Morollo Vice President, Publishing
Doug Barney Vice President, Editorial Director
Michele Imgrund Director, Marketing
Tracy Cook Online Marketing Director
ADVERTISING SALES: 508-532-1418/[email protected]
Matt Morollo VP, Publishing
Chris Kourtoglou Regional Sales Manager
William Smith National Accounts Director
Danna Vedder Microsoft Account Manager
Jenny Hernandez-Asandas Director Print Production
Serena Barnes Production Coordinator/[email protected]

Neal Vitale President & Chief Executive Officer


Richard Vitale Senior Vice President & Chief Financial Officer
Michael J. Valenti Executive Vice President
Abraham M. Langer Senior Vice President, Audience Development & Digital Media
Christopher M. Coates Vice President, Finance & Administration
Erik A. Lindgren Vice President, Information Technology & Application Development
Carmel McDonagh Vice President, Attendee Marketing
David F. Myers Vice President, Event Operations
Jeffrey S. Klein Chairman of the Board

MSDN Magazine (ISSN 1528-4859) is published monthly by 1105 Media, Inc., 9201 Oakdale Avenue,
Ste. 101, Chatsworth, CA 91311. Periodicals postage paid at Chatsworth, CA 91311-9998, and at
additional mailing offices. Annual subscription rates payable in U.S. funds: U.S. $35; Canada $45;
International $60. Single copies/back issues: U.S. $10, all others $12. Send orders with payment
to: MSDN Magazine, P.O. Box 3167, Carol Stream, IL 60132, e-mail [email protected] or
call 847-763-9560. POSTMASTER: Send address changes to MSDN Magazine, P.O. Box 2166, Skokie,
IL 60076. Canada Publications Mail Agreement No: 40612608. Return Undeliverable Canadian
Addresses to Circulation Dept. or IMS/NJ. Attn: Returns, 310 Paterson Plank Road, Carlstadt, NJ 07072.

Printed in the U.S.A. Reproductions in whole or part prohibited except by written permission. Mail requests
to “Permissions Editor,” c/o MSDN Magazine, 16261 Laguna Canyon Road, Ste. 130, Irvine, CA 92618.

Legal Disclaimer: The information in this magazine has not undergone any formal testing by 1105 Media,
Inc. and is distributed without any warranty expressed or implied. Implementation or use of any information
contained herein is the reader’s sole responsibility. While the information has been reviewed for accuracy,
there is no guarantee that the same or similar results may be achieved in all environments. Technical
inaccuracies may result from printing errors and/or new developments in the industry.

Corporate Address: 1105 Media, Inc., 9201 Oakdale Ave., Ste. 101, Chatsworth, CA 91311, www.1105media.com

Media Kits: Direct your Media Kit requests to Matt Morollo, VP Publishing, 508-532-1418 (phone),
508-875-6622 (fax), [email protected]

Reprints: For single article reprints (in minimum quantities of 250-500), e-prints, plaques and posters contact:
PARS International, Phone: 212-221-9595, E-mail: [email protected], www.magreprints.com/
QuickQuote.asp

List Rental: This publication’s subscriber list, as well as other lists from 1105 Media, Inc., is available
for rental. For more information, please contact our list manager, Merit Direct. Phone: 914-368-1000;
E-mail: [email protected]; Web: www.meritdirect.com/1105

All customer service inquiries should be sent to [email protected] or call 847-763-9560.





EDITOR’S NOTE KEITH WARD

Over-Educated, Yet Under-Qualified?

As we get going on the latest mini-bounce-back on what looks like an extremely long road to economic recovery, there is some good news: It looks like the tech sector may have a quicker—and higher—bounce than other industries. We’re finally getting some news that shows solid, sustained job growth in all areas of IT, including software development.
But as Adrian Monk, my favorite TV detective, would say, “Here’s the thing …” Is this great news for you if you’re hiring coders but can’t pay them a lot yet, which might mean plucking fresh fruit off the college tree? Because I’ve been reading some worrisome stuff about the quality of education computer science grads are getting, and the heartburn it’s causing both the grads and potential employers.
My concern was piqued by an article I saw on the InfoWorld Web site, “The sad standards of computer-related college degrees” (bit.ly/blp267). A concerned father wrote in about his daughter’s lack of preparedness for the world of real work. He writes: “Imagine my surprise (and, as it turned out, her relief) that she could get a four-year undergraduate degree in ‘data processing’ without having to write a single program in any language!
“This seems to be a trend,” the writer continues. “In an effort to widen and deepen my own skill set, I have had occasion to examine computer science course material available online from a number of top-tier colleges and some from the lower rungs. In most instances, what I remember from my nearly 40-year-old computer science education still places me far ahead of what they are now teaching.” And he concludes: “We’ve had trouble finding qualified U.S. job applicants who want to do the work we need done. I wonder if there’s a connection.”
The comments from readers accompanying the article support the writer’s contention, for the most part. Here’s a sampling:
From “rsr,” who claims to be a former computer science professor: “Computer Science (and related computer program) enrollments have greatly declined, and schools are trying to reverse the trend. This includes making the programs easier so there will be fewer dropouts and it will be more attractive to students who don’t want to work hard but still get a degree.”
“Woking,” a manager at a “Fortune 500 company,” is similarly unimpressed. “I have never interviewed a candidate right out of college who I would hire. No recent graduate that I have interviewed has had sufficient understanding of real-world problems to be useful to me, at least for the salary that the interviewees were expecting.”
Woking gives a specific example: “Several years ago I interviewed candidates for an open position as a data modeler. None of the recent college graduates who had even covered Entity Relationship Diagramming in their programs had created a data model with more than five entities.” Woking says that they have better success hiring candidates with three to five years’ work experience, even if the applicant lacks a college degree. That’s a pretty damning statement.
“Beney,” with 20-plus years’ experience and no IT degree, puts it succinctly: “Maybe if IT students had to actually write code rather than manipulate IDEs, they’d at least be able to handle the real world when they get out into the job market.”
Pretty discouraging stuff. What I’d like to do is use the power of the MSDN network to help determine if we’re facing a crisis when it comes to teaching college kids proper software development skills. If you’re a computer science professor, recent computer science graduate, hiring manager or anyone else with insight into this issue, let me know your thoughts at [email protected]. If you agree that this is a general failing of the education system, explain how you’d change things: What are the top two or three things you’d do? I’m looking forward to reading your responses. After all, if there’s a job to be filled, it makes sense that it be filled with a developer who can actually do that job.

Visit us at msdn.microsoft.com/magazine. Questions, comments or suggestions for MSDN Magazine? Send them to the editor: [email protected].

© 2010 Microsoft Corporation. All rights reserved.


Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, you are not permitted to reproduce, store, or introduce into a retrieval system MSDN Magazine or any part of MSDN
Magazine. If you have purchased or have otherwise properly acquired a copy of MSDN Magazine in paper format, you are permitted to physically transfer this paper copy in unmodified form. Otherwise, you are not permitted to transmit
copies of MSDN Magazine (or any part of MSDN Magazine) in any form or by any means without the express written permission of Microsoft Corporation.
A listing of Microsoft Corporation trademarks can be found at microsoft.com/library/toolbar/3.0/trademarks/en-us.mspx. Other trademarks or trade names mentioned herein are the property of their respective owners.
MSDN Magazine is published by 1105 Media, Inc. 1105 Media, Inc. is an independent company not affiliated with Microsoft Corporation. Microsoft Corporation is solely responsible for the editorial contents of this magazine. The
recommendations and technical guidelines in MSDN Magazine are based on specific environments and configurations. These recommendations or guidelines may not apply to dissimilar configurations. Microsoft Corporation does not make
any representation or warranty, express or implied, with respect to any code or other information herein and disclaims any liability whatsoever for any use of such code or other information. MSDN Magazine, MSDN, and Microsoft logos are
used by 1105 Media, Inc. under license from owner.

4 msdn magazine
CUTTING EDGE DINO ESPOSITO

Expando Objects in C# 4.0

Most of the code written for the Microsoft .NET Framework is based on static typing, even though .NET supports dynamic typing via reflection. Moreover, JScript had a dynamic type system on top of .NET 10 years ago, as did Visual Basic. Static typing means that every expression is of a known type. Types and assignments are validated at compile time and most of the possible typing errors are caught in advance.
The well-known exception is when you attempt a cast at run time, which may sometimes result in a dynamic error if the source type is not compatible with the target type.
Static typing is great for performance and for clarity, but it’s based on the assumption that you know nearly everything about your code (and data) beforehand. Today, there’s a strong need for relaxing this constraint a bit. Going beyond static typing typically means looking at three distinct options: dynamic typing, dynamic objects, and indirect or reflection-based programming.
In .NET programming, reflection has been available since the .NET Framework 1.0 and has been widely employed to fuel special frameworks, like Inversion of Control (IoC) containers. These frameworks work by resolving type dependencies at run time, thus enabling your code to work against an interface without having to know the concrete type behind the object and its actual behavior.
Using .NET reflection, you can implement forms of indirect programming where your code talks to an intermediate object that in turn dispatches calls to a fixed interface. You pass the name of the member to invoke as a string, thus granting yourself the flexibility of reading it from some external source. The interface of the target object is fixed and immutable—there’s always a well-known interface behind any calls you place through reflection.
Dynamic typing means that your compiled code ignores the static structure of types that can be detected at compile time. In fact, dynamic typing delays any type checks until run time. The interface you code against is still fixed and immutable, but the value you use may return different interfaces at different times.
The .NET Framework 4 introduces some new features that enable you to go beyond static types. I covered the new dynamic keyword in the May 2010 issue (msdn.microsoft.com/magazine/ee336309). In this article, I’ll explore the support for dynamically defined types such as expando objects and dynamic objects. With dynamic objects, you can define the interface of the type programmatically instead of reading it from a definition statically stored in some assemblies. Dynamic objects wed the formal cleanliness of statically typed objects with the flexibility of dynamic types.

Figure 1 The Structure of a Dynamically Created Web Forms Class

Scenarios for Dynamic Objects
Dynamic objects are not here to replace the good qualities of static types. Static types are, and will remain for the foreseeable future, at the foundation of software development. With static typing, you can find type errors reliably at compile time and produce code that, because of this, is free of runtime checks and runs faster. In addition, the need to pass the compile step leads developers and architects to take care in the design of the software and in the definition of public interfaces for interacting layers.
There are, however, situations in which you have relatively well-structured blocks of data to be consumed programmatically. Ideally, you’d love to have this data exposed through objects. But, instead, whether it reaches you over a network connection or you read it from a disk file, you receive it as a plain stream of data. You have two options to work against this data: using an indirect approach or using an ad hoc type.
In the first case, you employ a generic API that acts as a proxy and arranges queries and updates for you. In the second case, you have a specific type that perfectly models the data you’re working with. The question is, who’s going to create such an ad hoc type?
In some segments of the .NET Framework, you already have good examples of internal modules creating ad hoc types for specific blocks of data.

Code download available at code.msdn.microsoft.com/mag201007CutEdge.
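The reflection-based, indirect style described above can be sketched in a few lines. This is my illustration, not code from the article: the Calculator type, its Add method and the InvokeByName helper are hypothetical names introduced purely to show a member being invoked by a name that arrives as a string.

```csharp
using System;
using System.Reflection;

public class Calculator {
    public int Add(int x, int y) { return x + y; }
}

public static class ReflectionDemo {
    // Dispatch a call indirectly: the method name is just a string,
    // so it could as easily come from a configuration file.
    public static object InvokeByName(object target, String methodName,
                                      params object[] args) {
        MethodInfo method = target.GetType().GetMethod(methodName);
        return method.Invoke(target, args);
    }

    public static void Main() {
        var calc = new Calculator();
        // The compiler never sees the name "Add"; binding happens at run time.
        Console.WriteLine(InvokeByName(calc, "Add", 2, 3)); // prints 5
    }
}
```

Note that the interface of Calculator stays fixed; only the point of call is indirect, which is exactly the trade-off the column describes.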



One obvious example is ASP.NET Web Forms. When you place a request for an ASPX resource, the Web server retrieves the content of the ASPX server file. That content is then loaded into a string to be processed into an HTML response. So you have a relatively well-structured piece of text with which to work. To do something with this data, you need to understand what references you have to server controls, instantiate them properly and link them together into a page. This can definitely be done using an XML-based parser for each request. In doing so, though, you end up paying the extra costs of the parser for every request, which is probably an unacceptable cost.
Due to these added costs of parsing data, the ASP.NET team decided to introduce a one-time step to parse the markup into a class that can be dynamically compiled. The result is that a simple chunk of markup like this is consumed via an ad hoc class derived from the code-behind class of the Web Forms page:

<html>
<head runat="server">
  <title></title>
</head>
<body>
  <form id="Form1" runat="server">
    <asp:TextBox runat="server" ID="TextBox1" />
    <asp:Button ID="Button1" runat="server" Text="Click" />
    <hr />
    <asp:Label runat="server" ID="Label1"></asp:Label>
  </form>
</body>
</html>

Figure 1 shows the runtime structure of the class created out of the markup. The method names in gray refer to internal procedures used to parse elements carrying the runat="server" attribute into instances of server controls.
You can apply this approach to nearly any situation in which your application receives external data to process repeatedly. For example, consider a stream of XML data that flows into the application. There are several APIs available to deal with XML data, from XML DOM to LINQ-to-XML. In any case, you have to either proceed indirectly by querying the XML DOM or LINQ-to-XML API, or use the same APIs to parse the raw data into ad hoc objects.
In the .NET Framework 4, dynamic objects offer an alternative, simpler API to create types dynamically based on some raw data. As a quick example, consider the following XML string:

<Persons>
  <Person>
    <FirstName> Dino </FirstName>
    <LastName> Esposito </LastName>
  </Person>
  <Person>
    <FirstName> John </FirstName>
    <LastName> Smith </LastName>
  </Person>
</Persons>

To transform that into a programmable type, in the .NET Framework 3.5 you’d probably use something like the code in Figure 2. The code uses LINQ-to-XML to load raw content into an instance of the Person class:

public class Person {
  public String FirstName { get; set; }
  public String LastName { get; set; }
  public String GetFullName() {
    return String.Format("{0}, {1}", LastName, FirstName);
  }
}

Figure 2 Using LINQ-to-XML to Load Data into a Person Object

// Load XML data and copy into a list object
var persons = GetPersonsFromXml(@"..\..\sample.xml");
foreach (var p in persons)
  Console.WriteLine(p.GetFullName());

public static IList<Person> GetPersonsFromXml(String file) {
  var persons = new List<Person>();

  var doc = XDocument.Load(file);
  var nodes = from node in doc.Root.Descendants("Person")
              select node;

  foreach (var n in nodes) {
    var person = new Person();
    foreach (var child in n.Descendants()) {
      if (child.Name == "FirstName")
        person.FirstName = child.Value.Trim();
      else if (child.Name == "LastName")
        person.LastName = child.Value.Trim();
    }
    persons.Add(person);
  }

  return persons;
}

The .NET Framework 4 offers a different API to achieve the same thing. Centered on the new ExpandoObject class, this API is more direct to write and doesn’t require that you plan, write, debug, test and maintain a Person class. Let’s find out more about ExpandoObject.

Using the ExpandoObject Class
Expando objects were not invented for the .NET Framework; in fact, they appeared several years before .NET. I first heard the term used to describe JScript objects in the mid-1990s. An expando is a sort of inflatable object whose structure is entirely defined at run time. In the .NET Framework 4, you use it as if it were a classic managed object except that its structure is not read out of any assembly, but is built entirely dynamically.

Figure 3 Using LINQ-to-XML to Load Data into an Expando Object

public static IList<dynamic> GetExpandoFromXml(String file) {
  var persons = new List<dynamic>();

  var doc = XDocument.Load(file);
  var nodes = from node in doc.Root.Descendants("Person")
              select node;

  foreach (var n in nodes) {
    dynamic person = new ExpandoObject();
    foreach (var child in n.Descendants()) {
      var p = person as IDictionary<String, object>;
      p[child.Name.LocalName] = child.Value.Trim();
    }
    persons.Add(person);
  }

  return persons;
}

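As a quick aside, the expando pattern in Figure 3 should feel familiar to anyone coming from a dynamic language. The following Python sketch is my own illustration (not part of the column's C# download) of the same idea: building attribute bags from XML, with types.SimpleNamespace standing in for ExpandoObject.

```python
# Python analog of Figure 3 (illustrative only; the article's sample is C#).
# SimpleNamespace plays the role of ExpandoObject: an attribute bag whose
# members are defined entirely at run time.
import xml.etree.ElementTree as ET
from types import SimpleNamespace

def expandos_from_xml(xml_text):
    persons = []
    root = ET.fromstring(xml_text)
    for node in root.findall("Person"):
        person = SimpleNamespace()
        for child in node:
            # setattr mirrors the indexer assignment in Figure 3:
            # the member name is known only at run time.
            setattr(person, child.tag, (child.text or "").strip())
        persons.append(person)
    return persons

people = expandos_from_xml(
    "<People><Person><FirstName>Dino</FirstName>"
    "<LastName>Esposito</LastName></Person></People>")
print(people[0].FirstName)
```

Just as with the C# expando, nothing about FirstName or LastName is declared anywhere; the members exist only because the XML contained them.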
An expando object is ideal to model dynamically changing information such as the content of a configuration file. Let's see how to use the ExpandoObject class to store the content of the aforementioned XML document. The full source code is shown in Figure 3.

The function returns a list of dynamically defined objects. Using LINQ-to-XML, you parse out the nodes in the markup and create an ExpandoObject instance for each of them. The name of each node below <Person> becomes a new property on the expando object. The value of the property is the inner text of the node. Based on the XML content, you end up with an expando object with a FirstName property set to Dino.

In Figure 3, however, you see an indexer syntax used to populate the expando object. That requires a bit more explanation.

Inside the ExpandoObject Class
The ExpandoObject class belongs to the System.Dynamic namespace and is defined in the System.Core assembly. ExpandoObject represents an object whose members can be dynamically added and removed at run time. The class is sealed and implements a number of interfaces:

public sealed class ExpandoObject :
  IDynamicMetaObjectProvider,
  IDictionary<string, object>,
  ICollection<KeyValuePair<string, object>>,
  IEnumerable<KeyValuePair<string, object>>,
  IEnumerable,
  INotifyPropertyChanged;

As you can see, the class exposes its content using various enumerable interfaces, including IDictionary<String, Object> and IEnumerable. In addition, it also implements IDynamicMetaObjectProvider. This is the standard interface that enables an object to be shared within the Dynamic Language Runtime (DLR) by programs written in accordance with the DLR interoperability model. In other words, only objects that implement the IDynamicMetaObjectProvider interface can be shared across .NET dynamic languages. An expando object can be passed to, say, an IronRuby component. You can't do that easily with a regular .NET managed object. Or, rather, you can, but you just don't get the dynamic behavior.

The ExpandoObject class also implements the INotifyPropertyChanged interface. This enables the class to raise a PropertyChanged event when a member is added or modified. Support of the INotifyPropertyChanged interface is key to using expando objects in Silverlight and Windows Presentation Foundation application front ends.

You create an ExpandoObject instance as you do with any other .NET object, except that the variable to store the instance is of type dynamic:

dynamic expando = new ExpandoObject();

At this point, to add a property to the expando you simply assign it a new value, as below:

expando.FirstName = "Dino";

It doesn't matter that no information exists about the FirstName member, its type or its visibility. This is dynamic code; for this reason, it makes a huge difference if you use the var keyword to assign an ExpandoObject instance to a variable:

var expando = new ExpandoObject();

This code will compile and work just fine. However, with this definition you're not allowed to assign any value to a FirstName property. The ExpandoObject class as defined in System.Core has no such member. More precisely, the ExpandoObject class has no public members.

This is a key point. When the static type of an expando is dynamic, the operations are bound as dynamic operations, including looking up members. When the static type is ExpandoObject, then operations are bound as regular compile-time member lookups. So the compiler knows that dynamic is a special type, but does not know that ExpandoObject is a special type.

In Figure 4, you see the Visual Studio 2010 IntelliSense options when an expando object is declared as a dynamic type and when it's treated as a plain .NET object. In the latter case, IntelliSense shows you the default System.Object members plus the list of extension methods for collection classes.

Figure 4 Visual Studio 2010 IntelliSense and Expando Objects

It should also be noted that some commercial tools in some circumstances go beyond this basic behavior. Figure 5 shows ReSharper 5.0, which captures the list of members currently defined on the object. This doesn't happen if members are added programmatically via an indexer.

To add a method to an expando object, you just define it as a property, except you use an Action<T> or Func<T> delegate to express the behavior. Here's an example:

person.GetFullName = (Func<String>)(() => {
  return String.Format("{0}, {1}",
    person.LastName, person.FirstName);
});

Figure 5 The ReSharper 5.0 IntelliSense at Work with Expando Objects

The method GetFullName returns a String obtained by combining the last name and first name properties assumed to be available on the expando object. If you attempt to access a missing member on expando objects, you'll receive a RuntimeBinderException exception.

XML-Driven Programs
To tie together the concepts I've shown you so far, let me guide you through an example where the structure of the data and the structure of the UI are defined in an XML file. The content of the file is parsed to a collection of expando objects and processed by the application. The application, however, works only with dynamically presented information and is not bound to any static type.

The code in Figure 3 defines a list of dynamically defined person expando objects. As you'd expect, if you add a new node to the XML schema, a new property will be created in the expando object. If you need to read the name of the member from an external source, you should employ the indexer API to add it to the expando. The ExpandoObject class implements the IDictionary<String, Object> interface explicitly. This means you need to segregate the ExpandoObject interface from the dictionary type in order to use the indexer API or the Add method:

(person as IDictionary<String, Object>)[child.Name] = child.Value;

Because of this behavior, you just need to edit the XML file to make a different data set available. But how can you consume this dynamically changing data? Your UI will need to be flexible enough to receive a variable set of data.

Let's consider a simple example where all you do is display data through the console. Suppose the XML file contains a section that describes the expected UI—whatever that means in your context. For the purpose of example, here's what I have:

<Settings>
  <Output Format="{0}, {1}"
          Params="LastName,FirstName" />
</Settings>

This information will be loaded into another expando object using the following code:

dynamic settings = new ExpandoObject();
settings.Format = node.Attribute("Format").Value;
settings.Parameters = node.Attribute("Params").Value;

The main procedure will have the following structure:

public static void Run(String file) {
  dynamic settings = GetExpandoSettings(file);
  dynamic persons = GetExpandoFromXml(file);
  foreach (var p in persons) {
    var memberNames =
      (settings.Parameters as String).Split(',');
    var realValues =
      GetValuesFromExpandoObject(p, memberNames);
    Console.WriteLine(settings.Format, realValues);
  }
}

The expando object contains the format of the output, plus the names of the members whose values are to be displayed. Given the person dynamic object, you need to load the values for the specified members, using code like this:

public static Object[] GetValuesFromExpandoObject(
  IDictionary<String, Object> person,
  String[] memberNames) {

  var realValues = new List<Object>();
  foreach (var m in memberNames)
    realValues.Add(person[m]);
  return realValues.ToArray();
}

Because an expando object implements IDictionary<String, Object>, you can use the indexer API to get and set values.

Finally, the list of values retrieved from the expando object is passed to the console for actual display. Figure 6 shows two screens for the sample console application, whose only difference is the structure of the underlying XML file.

Figure 6 Two Sample Console Applications Driven by an XML File

Admittedly, this is a trivial example, but the mechanics required to make it work are similar to that of more interesting examples. Try it out and share your feedback!

DINO ESPOSITO is the author of "Programming ASP.NET MVC" from Microsoft Press and coauthor of "Microsoft .NET: Architecting Applications for the Enterprise" (Microsoft Press, 2008). Based in Italy, Esposito is a frequent speaker at industry events worldwide. You can join his blog at weblogs.asp.net/despos.

THANKS to the following technical expert for reviewing this article:

DATA POINTS JULIE LERMAN

Windows Azure Table Storage—Not Your Father's Database
Windows Azure Table storage causes a lot of head scratching among developers. Most of their experience with data storage is with relational databases that have various tables, each containing a predefined set of columns, one or more of which are typically designated as identity keys. Tables use these keys to define relationships among one another.

Windows Azure stores information a few ways, but the two that focus on persisting structured data are SQL Azure and Windows Azure Table storage. The first is a relational database and aligns fairly closely with SQL Server. It has tables with defined schema, keys, relationships and other constraints, and you connect to it using a connection string just as you do with SQL Server and other databases.

Windows Azure Table storage, on the other hand, seems a bit mysterious to those of us who are so used to working with relational databases. While you'll find many excellent walk-throughs for creating apps that use Windows Azure Table storage, many developers still find themselves forced to make leaps of faith without truly understanding what it's all about.

This column will help those stuck in relational mode bridge that leap of faith with solid ground by explaining some core concepts of Windows Azure Table storage from the perspective of relational thinking. Also, I'll touch on some of the important strategies for designing the tables, depending on how you expect to query and update your data.

Code download available at code.msdn.microsoft.com/mag201007DataPoints.

Storing Data for Efficient Retrieval and Persistence
By design, Windows Azure Table services provides the potential to store enormous amounts of data, while enabling efficient access and persistence. The services simplify storage, saving you from jumping through all the hoops required to work with a relational database—constraints, views, indices, relationships and stored procedures. You just deal with data, data, data. Windows Azure Tables use keys that enable efficient querying, and you can employ one—the PartitionKey—for load balancing when the table service decides it's time to spread your table over multiple servers. A table doesn't have a specified schema. It's simply a structured container of rows (or entities) that doesn't care what a row looks like. You can have a table that stores one particular type, but you can also store rows with varying structures in a single table, as shown in Figure 1.

It All Begins with Your Domain Classes
Our typical development procedure with databases is to create them, define tables in them and then, for every table, define a particular structure—specific columns, each with a specified data type—as well as relationships to other tables. Our applications then push data into and pull data out of the tables.

With Windows Azure Table services, though, you don't design a database, just your classes. You define your classes and a container (table) that one or more classes belong to, then you can save instantiated objects back to the store as rows.

In addition to the properties you need in your classes, each class must have three properties that are critical in determining how Windows Azure Table services do their job: PartitionKey, RowKey and TimeStamp. PartitionKey and RowKey are both strings, and there's an art (or perhaps a science) to defining them so you get the best balance of query and transaction efficiency along with scalability at run time. For a good understanding of how to define PartitionKeys and RowKeys for the most benefit, I highly recommend the PDC09 session "Windows Azure Tables and Queues Deep Dive," presented by Jai Haridas, which you can watch at microsoftpdc.com/sessions/svc09.

PartitionKeys and RowKeys Drive Performance and Scalability
Many developers are used to a system of primary keys, foreign keys and constraints between the two. With Windows Azure Table storage, you have to let go of these concepts or you'll have difficulty grasping its system of keys.

In Windows Azure Tables, the string PartitionKey and RowKey properties work together as an index for your table, so when defining them, you must consider how your data is queried. Together, the properties also provide for uniqueness, acting as a primary key for the row. Each entity in a table must have a unique PartitionKey/RowKey combination.
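To make that uniqueness rule concrete, here's a tiny Python sketch of my own (not from the column's code download) that models a table as a map keyed by the (PartitionKey, RowKey) pair:

```python
# Illustrative sketch: a table enforces uniqueness on the
# (PartitionKey, RowKey) pair, much like the rule described above.
class TinyTable:
    def __init__(self):
        self._rows = {}

    def insert(self, partition_key, row_key, entity):
        key = (partition_key, row_key)
        if key in self._rows:
            raise ValueError("duplicate PartitionKey/RowKey")
        self._rows[key] = entity

    def get(self, partition_key, row_key):
        return self._rows[(partition_key, row_key)]

table = TinyTable()
table.insert("lerman_julie", "row-guid-1", {"City": "Burlington"})
table.insert("lerman_julie", "row-guid-2", {"City": "San Diego"})
print(len(table._rows))  # prints 2: same partition, unique RowKeys
```

Two rows can share a PartitionKey, but inserting the same PartitionKey/RowKey pair twice fails, which is exactly the composite primary-key behavior described above. (The city values here are invented for the example.)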


Figure 1 A Single Windows Azure Table Can Contain Rows Representing Similar or Different Entities (the diagram shows a Contact Table holding both Contact rows with FirstName and LastName and Address rows with Street, City and Region, plus a Product Table holding Product rows with Name and Price)

But you need to consider more than querying when defining a PartitionKey, because it's also used for physically partitioning the tables, which provides for load balancing and scalability. For example, consider a table that contains information about food and has PartitionKeys that correspond to the food types, such as Vegetable, Fruit and Grain. In the summer, the rows in the Vegetable partition might be very busy (becoming a so-called "hot" partition). The service can load balance the Food table by moving the Vegetable partition to a different server to better handle the many requests made to the partition.

If you anticipate more activity on that partition than a single server can handle, you should consider creating more-granular partitions such as Vegetable_Root and Vegetable_Squash. This is because the unit of granularity for load balancing is the PartitionKey. All the rows with the same PartitionKey value are kept together when load balancing. You could even design your table so that every single entity in the table has a different partition.

Digging Deeper into PartitionKeys and Querying
Notice that when I suggested fine-tuning the Vegetable PartitionKeys, I placed Vegetable at the beginning of the key, not the end. That's another mechanism for enabling more efficient queries. Queries to Windows Azure Tables from the Microsoft .NET Framework use LINQ to REST and a context that derives from the WCF Data Services System.Data.Services.Client.DataServiceContext. If you want to find any green squash, you can search in the Vegetable_Squash partition without wasting resources to search the entire table:

var query = _serviceContext.FoodTable.AsTableServiceQuery()
  .Where(c => c.PartitionKey == "Vegetable_Squash" && c.Color == "Green");

A big difference between querying OData (returned by WCF Data Services) and querying against Windows Azure Tables is that string functions are not supported. If you want to search part of a string, you must use String.CompareTo to inspect the beginning characters of the string. If you want to query the entire Vegetable category, however, you can use the CompareTo method to do a prefix search over the start of the PartitionKey:

var query = _serviceContext.FoodTable.AsTableServiceQuery()
  .Where(c => c.PartitionKey.CompareTo("Vegetable") >= 0
    && c.PartitionKey.CompareTo("Vegetablf") < 0
    && c.Color == "Green");

This would limit the search to only partitions that begin with Vegetable—nothing less and nothing more. (Using Vegetablf rather than Vegetable in the second predicate defines the upper bound, preventing partitions that sort beyond the Vegetable prefix, such as Yogurt, from being returned.) In the code sample accompanying this article, you'll see how I've done this replacement dynamically.

Parallel Querying for Full Table Scans
What if you were searching for all green food, regardless of type? Windows Azure would have to scan through the entire table. If it's a large table, Windows Azure throws in another wrench: It can return only 1,000 rows at a time (or process for 5 seconds). Windows Azure will return those results along with a continuation key, then go back for more. This can be a tedious synchronous process.

Instead you could execute a number of queries, perhaps iterating through a known list of categories, then building each query:

_serviceContext.FoodTable.AsTableServiceQuery()
  .Where(c => c.PartitionKey == _category && c.Color == "Green");

Then you can send off all the queries to run in parallel.

More Design Considerations for Querying
The RowKey property serves a number of purposes. In combination with PartitionKey, it can define uniqueness within a table for each row. For example, I know another Julie Lerman (truly I do). So the RowKey will be critical in differentiating us when we share a PartitionKey of lerman_julie. You can also use RowKey to help with sorting, because it acts as part of an index. So then, what would be useful RowKeys for Julie Lerman the elder (that's me) and Julie Lerman the younger? A GUID will certainly do the trick for identity, but it does nothing for searches or sorting. In this case, a combination of values would probably be best.

What else differentiates us? We live on opposite ends of the United States, but locations can change so that's not useful for a key. Certainly our date of birth is different (by more than 20 years) and that's a static value. But there's always the chance that another Julie Lerman with my birth date exists somewhere in the world and could land in my database—highly implausible, but not impossible. After all of the deliberation I might go through, birth date may still not be a value on which my application is searching or sorting. So in this case, RowKey might not be part of queries, and a plain-old GUID would suffice. You'll have to make these kinds of decisions for all of your Windows Azure Tables.

There's much more to learn about defining keys, and factors such as retrieving data, storing data, scalability and load balancing all come into play.
Rethinking Relationships
In a relational database, we rely on foreign keys and constraints to define relationships. We certainly could define a foreign key property in a class that refers to another class, but there's nothing in Windows Azure Table storage to enforce relationships. Your code will still be responsible for that.

This impacts how you perform queries and updates (including inserts and deletes) from tables. When querying, you can't perform joins across tables. And when persisting data, you can't have transacted commands that span partitions or tables. There is, however, a mechanism for working with data in graphs, which is something I pointed out at the beginning of this column—you can store rows of varying schemas in a single table.

If your application requires that users work with contacts and addresses together, you could store the addresses in the same table as the contacts. It would be critical to ensure that the addresses have the same PartitionKey—for example, "lerman_julie." Also, the RowKey should contain a value that specifies the type or kind of entity, such as "address_12345," so you can easily differentiate between contact types and address types when querying.

The common PartitionKey ensures that the rows will always stay together to take advantage of a feature called Entity Group Transactions (EGT). This allows a single transaction to carry out operations atomically across multiple entities as long as all the entities have the same PartitionKey value. One of the benefits of EGT with respect to related data is that you can perform a transacted update on all the entities in a single transaction.
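The contact-plus-address layout just described (one shared PartitionKey, with a type marker in the RowKey) can be sketched in a few lines. This is a Python illustration of my own, not the column's sample code:

```python
# Rows of different shapes share a PartitionKey and are told apart
# by a type prefix on the RowKey, as described above.
rows = [
    {"PartitionKey": "lerman_julie", "RowKey": "contact_001",
     "FirstName": "Julie"},
    {"PartitionKey": "lerman_julie", "RowKey": "address_12345",
     "City": "Burlington"},
]

def rows_of_kind(rows, partition_key, kind):
    # Filter one partition down to a single kind of entity by RowKey prefix.
    return [r for r in rows
            if r["PartitionKey"] == partition_key
            and r["RowKey"].startswith(kind + "_")]

print([r["RowKey"] for r in rows_of_kind(rows, "lerman_julie", "address")])
# prints ['address_12345']
```

Because both rows live in the same partition, an Entity Group Transaction could update the contact and its address atomically.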

A Base of Understanding from Which to Learn More
Windows Azure Tables live in the cloud, but for me they began in a fog. I had a lot of trouble getting my head wrapped around them because of my preconceived understanding of relational databases. I did a lot of work (and pestered a lot of people) to enable myself to let go of the RDBMS anchors so I could embrace and truly appreciate the beauty of Windows Azure Tables. I hope my journey will make yours shorter.

There's so much more to learn about Windows Azure Table services. The team at Microsoft has some great guidance in place on MSDN. In addition to the PDC09 video mentioned earlier, check this resource page on the Windows Azure Storage team blog at blogs.msdn.com/windowsazurestorage/archive/2010/03/28/windows-azure-storage-resources. The team continues to add detailed, informative posts to the blog, and I know that in time, or even by the time this column is published, I'll find answers to my myriad questions. I'm looking forward to providing some concrete examples in a future Data Points column.

JULIE LERMAN is a Microsoft MVP, .NET mentor and consultant who lives in the hills of Vermont. You can find her presenting on data access and other Microsoft .NET topics at user groups and conferences around the world. Lerman blogs at thedatafarm.com/blog and is the author of the highly acclaimed book, "Programming Entity Framework" (O'Reilly Media, 2009). Follow her on Twitter.com: julielerman.

THANKS to the following technical experts for reviewing this article: Brad Calder and Jai Haridas
OFFICE ADD-INS

3 Solutions for Accessing SharePoint Data in Office 2010

Donovan Follette and Paul Stubbs

Millions of people use the Microsoft Office client applications in support of their daily work to communicate, scrub data, crunch numbers, author documents, deliver presentations and make business decisions. In ever-increasing numbers, many are interacting with Microsoft SharePoint as a portal for collaboration and as a platform for accessing shared data and services.

Some developers in the enterprise have not yet taken advantage of the opportunity to build custom functionality into Office applications—functionality that can provide a seamless, integrated experience for users to directly access SharePoint data from within familiar productivity applications. For enterprises looking at ways to improve end-user productivity, making SharePoint data available directly within Office applications is a significant option to consider.

With the release of SharePoint 2010, there are a number of new ways available to access SharePoint data and present it to the Office user. These range from virtually no-code solutions made possible via SharePoint Workspace 2010 (formerly known as Groove), direct synchronization between SharePoint and Outlook, the new SharePoint REST API and the new client object model. Just as in Microsoft Office SharePoint Server (MOSS) 2007, a broad array of Web services is available in SharePoint 2010 for use as well.

In this article, we'll describe a couple of no-code solutions and show you how to build a few more-complex solutions using these new features in SharePoint 2010.

This article discusses:
• Using external data sources
• Building a Word add-in
• Using the client object model
• Web services as social services

Technologies discussed:
Office 2010, SharePoint 2010, Windows Communication Foundation

External Data Sources
Let's start by taking a quick look at the SharePoint list types you can employ as data sources.

One particularly useful data source is an external list that displays data retrieved via a connection to a line-of-business (LOB) system. MOSS 2007 let you connect to LOB data using the Business Data Catalog (BDC), which provided read-only access to back-end systems. SharePoint 2010 provides Business Connectivity Services (BCS), which is an evolution of the BDC that supports full read/write access to your LOB data.

Why would you want to bring LOB data into SharePoint? Consider the use case where you have a customer relationship management (CRM) system that only a limited number of people in the organization can access directly. However, there's a customer table in the database with name and address data that could be used by many others if it were available. In real life, you probably end up with users copying this information from various
non-authoritative sources and pasting it into their Office documents. It would be better to access this customer data from the authoritative CRM system and expose it in SharePoint as an external list that Office clients can access.

SharePoint Designer 2010 is the tool used for configuring access to a LOB system and making its data available in a SharePoint external list. There are a couple steps required to do this.

The first step is to create a new External Content Type (ECT). The ECT contains metadata describing the structure of the back-end data, such as the fields and CRUD methods that SharePoint will use to interact with it. Once the ECT has been created, an external list can be generated from it on any site within SharePoint. External lists look and act like any other standard list in SharePoint, but the external list data is not stored in SharePoint. Instead, it's retrieved via the ECT when accessed by an end user.

SharePoint Designer includes default support for connecting to external data sources including SQL Server, Windows Communication Foundation (WCF) and the Microsoft .NET Framework. Therefore, an ECT can be easily created for connecting to any SQL Server database table or view, WCF service or Web service. Custom .NET solutions can be built in Visual Studio 2010 using the new SharePoint 2010 Business Data Connectivity Model project template.

For the purposes of this article, the SQL Server data source type was used to create an ECT for a database table. Then the ECT was used to create an External List. Figure 1 shows the resulting "Customers From CRM" ECT after completing the configuration in SharePoint Designer.

Figure 1 ECT Configuration for Accessing External CRM Data

There are a couple things to call out here. First, notice in the External Content Type Information panel that the Office Item Type property value is set to Contact. During the configuration process, you can map the external data fields to a corresponding Office item type like Contact. This isn't a requirement, but because the name and address data from the CRM database can be mapped nicely to an Outlook Contact, this designation was chosen. You'll be able to use the result of this configuration option in Outlook later.

Second, notice in the External Content Type Operations panel that full CRUD methods have been enabled for this ECT. This was due to the selections made in the configuration wizard. However, there certainly may be business reasons to limit the LOB system operations to read-only. In that case, you can simply select the Read List and Read Item operations during configuration. These are the only two operations required to create an ECT.

Once the ECT is created, it's a simple step to create an external list from it. You can do this by creating a new external list from within SharePoint or SharePoint Designer.

SharePoint Standard Lists
Of course, you can employ standard SharePoint lists to display business data. For example, say your department manages training-course content. You maintain two SharePoint lists: Course Category and Course. These lists contain the course information that employees on other teams use to create customer correspondence, brochures or advertising campaigns. So the data is maintained by a small team, but must be readily available for use by many people across the company.

SharePoint 2010 has a new capability whereby lookups form relationships between lists. When creating a new column on a list, one of the options is to make the column a lookup type, then indicate another list within the site as its source. SharePoint supports single-value lookups for one-to-many relationships or multi-value lookups for many-to-many relationships. If you choose, SharePoint will also maintain referential integrity between the lists supporting restricted or cascading deletes. This provides a number of options in how you set up and use lists in SharePoint.

Going back to our example, you could easily create a Course list lookup column named Category that's sourced from the Course Category list as shown in Figure 2.

Bringing SharePoint List Data to Office
So far, we've looked at how to surface external data as SharePoint lists using the new BCS features in SharePoint 2010. Users can access the data via the browser on a PC or a mobile device, but users will probably appreciate the rich experience of the full Office
client application. Let's now turn our attention to using the SharePoint list data on the client in two ways. First, we'll see how you can access data without writing any code by employing SharePoint Workspace and Outlook.

When developing our sample CRM solution, there are two Connect & Export buttons in the SharePoint ribbon for the external customers list: Sync to SharePoint Workspace and Connect to Outlook (see Figure 3). If SharePoint Workspace 2010 is installed on the client computer, Sync to SharePoint Workspace lets you synchronize lists and document libraries to the client with a single click. A local cached copy of the content is then available to the user in SharePoint Workspace whether the user is online or offline. When the user is in an offline state and modifies a list item or document and saves it locally, the list item or document will be synchronized with SharePoint automatically when the user is back online again. You'll see this implemented in a Word add-in in the next section.

Connect to Outlook will synchronize this external list directly to Outlook. Again, no code required, with SharePoint data landing in the Office client.

Figure 2 Using a Lookup List to Source Course Category Data

Using the REST API
No-code solutions, such as those enabled through SharePoint Workspaces and Outlook list connectivity, are great, but there are some user experiences that require a more-customized solution. To accommodate these, we need to provide access to the list data in the Office applications in a way that permits us to further tailor the solution.

Possibly one of the easiest ways for a developer to access SharePoint list and document library data is via the new REST API (listdata.svc). Most of the data in SharePoint is exposed as a RESTful endpoint. The standard location for SharePoint services is _vti_bin, so if you simply type into your browser the URL to your site and append /_vti_bin/listdata.svc, you will get back a standard ATOM services document that describes the collections available on the site (see Figure 5).

Notice that the Course and CourseCategory lists are present. By further appending /Course to the URL, you can retrieve all the courses in the list or you can retrieve any one specific course by appending a number. For example, this will return the third course:

http://intranet.contoso.com/sites/SPC/_vti_bin/listdata.svc/Course(3)

You can do more advanced queries by appending the following property filter:

?$filter=startswith(propertyname,'value')

But an advanced query that's important here is one that can return the Courses with their associated CourseCategory data. By appending the following to the site URL, you can retrieve the combined structure of Course and CourseCategory in a single payload:

/_vti_bin/listdata.svc/Course?$expand=Category

Building a Word Add-In
Once you know how to leverage the REST APIs to acquire access to the data, you can surface the data in the client applications where users have a rich authoring experience. For this example, we'll build a Word add-in and present this data to the user in a meaningful way. This application will have a dropdown list for the course categories, a listbox that loads with courses corresponding to the category selection and a button to insert text about the course into
And because full CRUD methods were defined in the ECT, any the Word document.
changes made to the customer data in SharePoint Workspace will In Visual Studio 2010, create a new Office 2010 Word add-in
be updated in the CRM database as well. project in C#.
Because we mapped the CRM database
fields to the Contact Office item type during
ECT configuration, SharePoint can provide
our external list data to Outlook as native
Contact Items. By clicking the Connect to
Outlook button on the ribbon, SharePoint Figure 3 Connect & Export Options in the SharePoint Ribbon

22 msdn magazine Office Add-Ins
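Before wiring the service reference into the add-in, it can help to see the REST URLs from this section exercised directly. The following is a minimal sketch (not from the article) that requests the expanded Course feed with WebClient; the site URL is the article's sample value, and the Name property used in the filter is an assumption that may differ in your list schema.

```csharp
// Sketch: requesting listdata.svc feeds directly over HTTP.
// Requires a reachable SharePoint 2010 site; URLs are sample values.
using System;
using System.Net;

class RestQuerySketch
{
    static void Main()
    {
        string listData = "http://intranet.contoso.com/sites/SPC/_vti_bin/listdata.svc";

        // The query shapes described above:
        string allCourses   = listData + "/Course";
        string thirdCourse  = allCourses + "(3)";
        string filtered     = allCourses + "?$filter=startswith(Name,'SharePoint')"; // 'Name' assumed
        string withCategory = allCourses + "?$expand=Category";

        using (var client = new WebClient())
        {
            // Pass the logged-in user's credentials, as SharePoint expects.
            client.Credentials = CredentialCache.DefaultCredentials;

            // Each GET returns an ATOM feed; here we just peek at the payload.
            string atom = client.DownloadString(withCategory);
            Console.WriteLine(atom.Substring(0, Math.Min(200, atom.Length)));
        }
    }
}
```

The same URLs work in a browser, which makes them a convenient way to debug a query before committing it to code.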




For the UI, you will create a custom task pane, which provides a UI in Office applications that can be docked on the top, bottom, left or right of the application. Task panes can have Windows Forms controls added to them, including the Windows Presentation Foundation (WPF) user control that will be used here.
Add a WPF user control to the project using the Add New Item dialog and name it CoursePicker. When the designer opens, replace the Grid element with the XAML snippet shown in Figure 6. This simply adds the ComboBox, Button and ListBox and sets some properties. You will add a couple events later.
Open the CoursePicker.xaml.cs file. Immediately following the namespace, you'll add two using statements, one for your service reference, ServiceReference1, and one for System.Net:
namespace Conf_DS {
  using ServiceReference1;
  using System.Net;

Figure 4 Accessing External List Data in a SharePoint Workspace

Now add a new service data source. On the Add Service Reference panel in the wizard, enter the URL for your SharePoint site and append /_vti_bin/listdata.svc to it. For example:
http://intranet.contoso.com/_vti_bin/listdata.svc
After entering the URL, click Go. This retrieves the metadata for the SharePoint site. When you click OK, WCF Data Services will generate strongly typed classes for you by using the Entity Framework. This completely abstracts away the fact that the data source is SharePoint or an OData producer that provides data via the Open Data Protocol. From this point forward, you simply work with the data as familiar .NET classes.
In the CoursePicker class, the first order of business is to instantiate the data context object. Here, you pass in the URL to your site, again appended by the _vti_bin/listdata.svc designation:
public partial class CoursePicker : UserControl {
  Office2010DemoDataContext dc = new Office2010DemoDataContext(
    new Uri("http://intranet.contoso.com/sites/spc/_vti_bin/listdata.svc"));
Next you'll have a List class-level variable to cache the retrieved course items and save round-trips to the server:
List<CourseItem> courses = null;
The code to retrieve the Courses and CourseCategory data is in the OnInitialized override method. First, you designate your logged-in credentials to pass to the server. Then the course categories are retrieved via the data context object and bound to the category ComboBox. Finally, using the expand option, courses are returned with their associated category and loaded into the courses list object. This will cache the courses locally for better performance:
protected override void OnInitialized(EventArgs e) {
  dc.Credentials = CredentialCache.DefaultCredentials;

  // Load Category dropdown list
  cboCategoryLookup.DataContext = dc.CourseCategory;
  cboCategoryLookup.SelectedIndex = 0;

  // To cache data locally for courses
  // Expand to retrieve the Category as well.
  courses = dc.Course.Expand("Category").ToList();

  base.OnInitialized(e);
}

Figure 5 ATOM Services Document

Now you need to add a couple events. Return to the CoursePicker designer and double-click the button to create the button click event. Next, click on the ComboBox and in the properties menu, click the Events tab and double-click the SelectionChanged



event. Add code to your SelectionChanged event handler so it looks like this:
private void cboCategoryLookup_SelectionChanged(
  object sender, SelectionChangedEventArgs e) {

  courseListBox.DataContext =
    from c in courses
    where c.Category.CategoryName ==
      cboCategoryLookup.SelectedValue.ToString()
    orderby c.CourseID
    select c;
}
Here, a simple LINQ query searches the courses list object (the one loaded with data retrieved using the expand option) to find all the courses that have a category name that matches the name of the course category selected in the ComboBox. It also orders the results to provide a clean user experience.
Finally, add code to the button event handler to cast the selected listbox item into a CourseItem object. Then you take the various data elements you want to present to the user and place them in the document at the location of the insertion point:
private void button1_Click(
  object sender, RoutedEventArgs e) {

  CourseItem course =
    (CourseItem)courseListBox.SelectedItem;
  Globals.ThisAddIn.Application.Selection.InsertAfter(
    String.Format("{0}: {1} \n{2}\n", course.CourseID,
    course.Name, course.Description));
}
And that's it—really simple code for accessing the data in SharePoint via WCF Data Services.
Now open the ThisAddIn.cs file. This is the main entry point for all add-ins for Office. Here you add the code to instantiate the task pane:
private void ThisAddIn_Startup(
  object sender, System.EventArgs e) {
  UserControl wpfHost = new UserControl();
  ElementHost host = new ElementHost();
  host.Dock = DockStyle.Fill;
  host.Child = new CoursePicker();
  wpfHost.Controls.Add(host);
  CustomTaskPanes.Add(
    wpfHost, "Training Courses").Visible = true;
}
The CoursePicker WPF user control can't be directly added to the custom task pane objects collection. It must be hosted in an ElementHost control, which provides the bridge between WPF controls and Windows Forms controls. Notice that the CoursePicker object is added as a child of the ElementHost object and then the ElementHost object is added to the custom task pane object collection. An Office application can have more than one custom task pane installed and available to the user at any given time, so the task pane for this add-in will just be one in the collection. Figure 7 shows the completed add-in.

Figure 6 Word Add-In UI Markup
<StackPanel>
  <ComboBox
    Name="cboCategoryLookup" Width="180" Margin="5"
    HorizontalAlignment="Center" IsEditable="False"
    ItemsSource="{Binding}"
    DisplayMemberPath="CategoryName"
    SelectedValuePath="CategoryName" />
  <Button Name="button1"
    Content="Insert Course Information" Margin="5" />
  <ListBox Name="courseListBox" ItemsSource="{Binding}">
    <ListBox.ItemTemplate>
      <DataTemplate>
        <StackPanel>
          <StackPanel Orientation="Horizontal">
            <TextBlock Text="{Binding Path=CourseID}"
              FontWeight="Bold" />
            <TextBlock Text=": " FontWeight="Bold" />
            <TextBlock Text="{Binding Path=Name}" />
          </StackPanel>
          <TextBlock Text="{Binding Path=Description}"
            Margin="5 0 0 0" />
        </StackPanel>
      </DataTemplate>
    </ListBox.ItemTemplate>
  </ListBox>
</StackPanel>

Figure 7 The Word Add-In at Work

With the data appearing in the Office application, you can take the solution further by adding code that interacts with the Word APIs. For example, you can add code so that when a user selects a course, the information is inserted and formatted in the document. The Office application APIs are rich and allow you to add more features to your custom solution that can make users even more productive. Next, we'll see an example of this with Word content controls connected to a client-side SharePoint object model.

Using the Client Object Model
Using the REST APIs to gain access to the data is one among a few options available to you. For example, there are also three new APIs available for SharePoint 2010 that provide a consistent programming model across the JavaScript, .NET managed applications and Silverlight clients. These three client object models interact with SharePoint using a subset of the server object model capabilities and essentially interoperate with SharePoint at the site collection level and below: webs, lists, listitems, content types, fields and external lists. If you're familiar with the server object model, you'll be familiar with the client object model.
To demonstrate using the client object model, we'll use the external list containing the CRM customers to build a document-level Word add-in where the action pane is loaded with the customers. This is a case where you'll need to use the client object model because the List Data Service doesn't provide access to external lists. In this example, the user can select a customer and insert his name and address information into content controls in a quote document template.
The previous Course and Category example was an application-level add-in. An application-level Word add-in will be present every time Word is started. Document-level add-ins, however, are bound to a document and will only load if a document of a certain type is


opened. In this case, the external customers list will only be presented to the user when working on a quote document.
In Visual Studio, start by creating a new Word 2010 document project. In the wizard, you'll need to select either a default document or a document that you've already saved. In my case, I used a quote document I had already saved. The document opens inside Visual Studio and Word becomes the document designer.
You can use the toolbox to place controls directly on the document surface as you would a Windows Forms application. Here you add Word content controls for the name and address information. These content controls will be populated with data from the user's customer selection at run time.
To add a content control to the document, select the text on the document that you want to wrap in the content control. Then drag a RichTextContentControl from the Word Controls in the toolbox and drop it on the selected text. Then provide a Name for the control and a Text value in Properties. Do this for customer and company name, address, city and customer ID so your document looks like Figure 8.
Because the client object model does not provide strongly typed data from the server, you need to add a Customer class to the project. The Customer class will be used to map data returned from the client object model:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace CSOM_Quote {
  public class Customer {
    public string CustomerID { get; set; }
    public string CompanyName { get; set; }
    public string ContactName { get; set; }
    public string Address { get; set; }
    public string City { get; set; }
  }
}
To use the client object model you need to reference Microsoft.SharePoint.Client and Microsoft.SharePoint.Client.Runtime. As with the previous example, data retrieval takes place in the OnInitialized override method. There are a couple of major differences between coding against the client object model and WCF Data Services. First, the client object model expects that you have familiarity with SharePoint and its structure. With WCF Data Services, that's abstracted away and you work with the data only. Second, with the client object model, the returned data is not strongly typed. You're responsible for getting the data into objects that you can use for LINQ queries and data binding.
The data access code is shown in Figure 9. The client context is the central object here. Pass the site URL to create a new instance of the ClientContext. Then you can start creating SharePoint objects using the following steps:
1. Create a site
2. Create a collection of site lists
3. Get a list with a specific name
4. Get all the items in the list
5. Load the query into the context
6. Execute the query
Before calling the ExecuteQuery method, all the previous statements are queued and then only sent to the server when ExecuteQuery is called. This way, you're in control of the bandwidth and payloads. Once the query returns with its results, the remaining code maps the data into a customers list object that can be bound to the customer listbox control.

Figure 7 The Word Add-In at Work
Figure 8 Creating the Quote Document

A WPF user control is used for this example as well. Because the XAML is similar to the previous example, it isn't shown here. However, the code to instantiate a document-level action pane rather than an application-level task pane is a bit different, as you can see here:
public partial class ThisDocument {
  private CustomersCRM CustomerActionPane =
    new CustomersCRM();

  private void ThisDocument_Startup(
    object sender, System.EventArgs e) {

    ElementHost host = new ElementHost();
    host.Dock = DockStyle.Fill;
    host.Child = new CustomerPicker();
    CustomerActionPane.Controls.Add(host);
    this.ActionsPane.Controls.Add(CustomerActionPane);
  }
  ...
Notice that the customer picker WPF user control is added to the ElementHost, the ElementHost object is added to the customer action pane controls collection, and then the customer action pane is added to the actions pane controls collection.
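To make the queue-then-execute pattern concrete, here's a small standalone sketch, separate from the article's Figure 9 code. It assumes the article's sample site URL and a City text field on the Customers list (both assumptions), and adds a CAML filter so the server returns only matching rows in the single ExecuteQuery round-trip.

```csharp
// Sketch: batching client object model calls into one round-trip.
// Assumes Microsoft.SharePoint.Client(.Runtime) references and a
// reachable site; URL and field names are sample values.
using System;
using Microsoft.SharePoint.Client;

class BatchQuerySketch
{
    static void Main()
    {
        // Every request below is queued on this context;
        // nothing crosses the wire until ExecuteQuery.
        ClientContext context =
            new ClientContext("http://intranet.contoso.com/sites/spc");
        List customers = context.Web.Lists.GetByTitle("Customers");

        // CAML filter so the server does the narrowing, not the client.
        CamlQuery cq = new CamlQuery();
        cq.ViewXml =
            "<View><Query><Where>" +
            "<Eq><FieldRef Name='City'/><Value Type='Text'>Redmond</Value></Eq>" +
            "</Where></Query></View>";

        ListItemCollection items = customers.GetItems(cq);
        context.Load(items);     // queued
        context.ExecuteQuery();  // one round-trip executes the whole batch

        foreach (ListItem item in items)
            Console.WriteLine(item["City"]);
    }
}
```

Loading only the collections you need, and filtering in CAML, is what keeps the "you're in control of the bandwidth and payloads" promise in practice.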
Figure 9 CRM Add-In Data Access Code
protected override void OnInitialized(EventArgs e) {
  SPClientOM.ClientContext context =
    new ClientContext("http://intranet.contoso.com/sites/spc");
  SPClientOM.Web site = context.Web;
  SPClientOM.ListCollection lists = site.Lists;
  var theBCSList = lists.GetByTitle("Customers");
  SPClientOM.CamlQuery cq = new SPClientOM.CamlQuery();
  IQueryable<SPClientOM.ListItem> bcsListItems =
    theBCSList.GetItems(cq);
  bcsList = context.LoadQuery(bcsListItems);
  context.ExecuteQuery();

  var bcsCustomerData =
    from cust in bcsList
    select new Customer {
      CustomerID = cust.FieldValues.ElementAt(1).Value.ToString(),
      ContactName = cust.FieldValues.ElementAt(2).Value.ToString()
        + " "
        + cust.FieldValues.ElementAt(3).Value.ToString(),
      CompanyName = cust.FieldValues.ElementAt(4).Value.ToString(),
      Address = cust.FieldValues.ElementAt(5).Value.ToString(),
      City = cust.FieldValues.ElementAt(6).Value.ToString(), };

  foreach (var x in bcsCustomerData) {
    Customer tempCustomer = new Customer();
    tempCustomer.CustomerID = x.CustomerID;
    tempCustomer.CompanyName = x.CompanyName;
    tempCustomer.ContactName = x.ContactName;
    tempCustomer.Address = x.Address;
    tempCustomer.City = x.City;

    customers.Add(tempCustomer);
  }

  customerListBox.DataContext = customers;
  base.OnInitialized(e);
}

The last step is to add the button click event to populate the Word content controls with the appropriate name and address information, as shown in Figure 10.
First, you cast the selected listbox item to a customer object. Then data from the customer object is used to populate the content controls. The results will look like Figure 11.

Figure 10 Adding the Button Click Event to Word Content Controls
private void button1_Click(
  object sender, RoutedEventArgs e) {
  Customer customer =
    (Customer)customerListBox.SelectedItem;

  Globals.ThisDocument.wccContactName.Text =
    customer.ContactName;
  Globals.ThisDocument.wccCompanyName.Text =
    customer.CompanyName;
  Globals.ThisDocument.wccAddress.Text =
    customer.Address;
  Globals.ThisDocument.wccCity.Text =
    customer.City;
  Globals.ThisDocument.wccCustomerID.Text =
    customer.CustomerID;
}

Web Services as Social Services
So far you've seen a number of ways you can access SharePoint data from Office client applications. The final technique we'll look at is using Web services. SharePoint offers Web services as the primary way to access SharePoint data remotely. Web services in SharePoint 2010 give you access to nearly all of the functionality in SharePoint Server. Unlike some of the other data technologies you've seen, such as REST and the client object model, Web services cover both accessing data and accessing administrative functionality.
All of the Web services you love are still in there, such as the Lists.asmx and Search.asmx services. SharePoint Web services are implemented as ASP.NET Web services with the .asmx extension, and most of the new services in SharePoint 2010 are also written as ASMX services. This was mainly done to have the broadest compatibility with other products and tools.
A new focus of SharePoint Web services is social services. The center of all social applications is the user. SharePoint has a UserProfileService that allows you to access all of the profile information about a user. UserProfileService includes the standard properties such as name and address, but also includes other properties such as hobbies, skills, schools and colleagues. Colleagues (or friends as they're called in other public social sites) are a key feature in the SharePoint social structure.
Another important aspect of social applications is what people think about content they encounter. SharePoint has a SocialDataService that enables users to tag, rate and comment on data, documents and pages within your sites.
The third important social aspect of SharePoint is publishing activities and subscribing to activities that your colleagues generate. SharePoint provides an ActivityFeed and APIs to publish activities as a feed.
Because this isn't an article on the new social features in SharePoint, we won't go into more detail on these, but they do provide some important context for the examples later in this article. See the SharePoint Developer Center (msdn.microsoft.com/sharepoint) or the "Managing Social Networking with Microsoft Office SharePoint Server 2007" white paper (technet.microsoft.com/library/cc262436(office.12)) for more details.

Extending Outlook with Web Services
We've seen how SharePoint and Office provide a lot of choices when you're determining the best way to access data for use in Office applications. Another way includes consuming SharePoint Web services. In this example, we'll create a new Outlook Ribbon that lets you pull all of your SharePoint colleagues into Outlook as contact items. You're even able to surface the user's profile picture into Outlook, just as you're accustomed to seeing with contacts provided by Microsoft Exchange.
Start by creating a new Outlook add-in in Visual Studio 2010. We're going to write it in C#, but you could use Visual Basic if you prefer. In previous versions, Visual Basic had a slight advantage with support for features such as optional parameters, but C# now supports them, too.
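As a quick aside on the optional and named parameters just mentioned, here is a standalone sketch of the C# 4.0 syntax. The names are hypothetical and not part of the add-in; the point is that trailing arguments can now be omitted or passed by name, which is what previously made Visual Basic more pleasant against Office COM APIs.

```csharp
// Sketch: C# 4.0 optional and named parameters (hypothetical example).
using System;

class OptionalParamsDemo
{
    // Trailing parameters carry defaults, so callers can omit them
    // instead of passing placeholder arguments for every unused slot.
    static string Describe(string name, string title = "Colleague", bool upper = false)
    {
        string s = name + " (" + title + ")";
        return upper ? s.ToUpper() : s;
    }

    static void Main()
    {
        Console.WriteLine(Describe("Dan Jump"));              // Dan Jump (Colleague)
        Console.WriteLine(Describe("Dan Jump", upper: true)); // DAN JUMP (COLLEAGUE)
    }
}
```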
The Ribbon provides a consistent and easy way to interact with all of the Office applications. Outlook 2010 now includes a Ribbon for the main screen. In this example, you'll add a new Ribbon here. Visual Studio 2010 makes it easy to create Office Ribbons with a visual Ribbon Designer. You can simply drag controls from the toolbox on the left and drop them onto the design surface. In this example, you just need to set some properties, such as the label for the tab and group. Next add a button control onto the surface. Once you have a button added to your Ribbon group, you can set the size to large and set an image for the button. Your Ribbon will look similar to Figure 12.

Figure 11 The CRM Add-In Within Word

The last thing to do is set the property to determine when the Ribbon will be displayed. By default, the Ribbon is displayed on the Mail Explorer. This is the window you see when you open a mail item. In this sample, you want the Ribbon to display on the main screen. Select the Ribbon and set the RibbonType property to Microsoft.Outlook.Explorer. You can see there are a number of places where the Ribbon may appear, including the Mail and Contact Explorers.
Next, double-click on your Ribbon button to create a code-behind click event handler. This is the event you'll use to create the Outlook contact.
You're now ready to add the code that creates a contact in Outlook. Visual Studio 2010 makes this easy to do. I find it easier to break the problem into multiple smaller parts. First, you created the Outlook add-in, then you created the Ribbon. After each of these steps, make sure you press F5 to compile and run your application. Now you can create an Outlook contact using hard-coded values. After you verify that this is working, you can add the code that calls SharePoint. Again, at each step check that everything is working correctly before moving on to the next step.
Figure 13 shows the code to create a new hard-coded contact. This uses the CreateItem method to create a new ContactItem object. Then you can set the properties of the ContactItem and call the Save method to commit the changes.
The only really challenging piece is that the way to set the contact picture is to call the AddPicture method, which takes a path to a picture on disk. This is problematic because you want to pull images from SharePoint. You'll see how to do this in the next section. Once you verify that the code works and a contact is created in Outlook, you're ready to call SharePoint and add real contacts.

Figure 12 Creating a New Outlook Ribbon

Employing User Profile Service
UserProfileService is a SharePoint Web service you can use to access profile information, including a list of your colleagues and their profile information. To use this service, start by adding a reference to your project. Because this is a Web service and not a WCF service, you need to click the advanced tab of the Add Service
dialog, then click the Add Web Service button. This opens the old Add Web Service dialog that you remember from Visual Studio 2005.

Figure 13 Boilerplate Code to Create a Contact
Outlook.ContactItem newContact =
  Globals.ThisAddIn.Application.CreateItem(
  Outlook.OlItemType.olContactItem);
newContact.FirstName = "Paul";
newContact.LastName = "Stubbs";
newContact.Email1Address = "[email protected]";
newContact.CompanyName = "Microsoft";
newContact.JobTitle = "Technical Evangelist";
newContact.CustomerID = "123456";
newContact.PrimaryTelephoneNumber = "(425)555-0111";
newContact.MailingAddressStreet = "1 Microsoft Way";
newContact.MailingAddressCity = "Redmond";
newContact.MailingAddressState = "WA";
newContact.AddPicture(@"C:\me.png");
newContact.Save();

After you add the reference, you can add the code to retrieve your colleagues:
// Instantiate the Web service.
UserProfileService userProfileService =
  new UserProfileService();
// Use the current user log-on credentials.
userProfileService.Credentials =
  System.Net.CredentialCache.DefaultCredentials;
This code creates an instance of the service and passes your current credentials to the service. Next, call the GetUserColleagues method passing the user that you want to retrieve colleagues for. This will return an array of ContactData objects:
ContactData[] contacts =
  userProfileService.GetUserColleagues(
  "contoso\\danj");
We can now loop through all of the ContactData objects that represent profile data for the user's colleagues in SharePoint. We retrieve the extended properties by calling the GetUserProfileByName method, which returns an array of PropertyData that contains key and value pairs for each colleague:
// Add each Colleague as an Outlook Contact
foreach (ContactData contact in contacts) {
  // Get the users detailed Properties
  PropertyData[] properties =
    userProfileService.GetUserProfileByName(contact.AccountName);

  // Create a new Outlook Contact
  Outlook.ContactItem newContact =
    Globals.ThisAddIn.Application.CreateItem(
    Outlook.OlItemType.olContactItem);
Now we convert those key/value pairs into contact properties:
// Set the Contact Properties
newContact.FullName = contact.Name;
newContact.FirstName =
  properties[2].Values[0].Value.ToString();
newContact.LastName =
  properties[4].Values[0].Value.ToString();
newContact.Email1Address =
  properties[41].Values[0].Value.ToString();
...
Finally, we grab the contact photo and save the new contact:
// Download the users profile image from SharePoint
SetContactImage(properties, newContact);

newContact.Save();
The last piece of the puzzle is retrieving the contact's picture from SharePoint. One of the extended properties includes a path to a thumbnail of the user's profile picture. You need to download this picture to a temporary file on disk so that the Outlook API can add it to the ContactItem object:
private static void SetContactImage(
  PropertyData[] properties,
  Outlook.ContactItem newContact){

  // Download image to a temp file
  string userid = properties[16].Values[0].Value.ToString();
  string imageUrl = properties[15].Values[0].Value.ToString();
  string tempImage = string.Format(@"C:\{0}.jpg", userid);
  WebClient Client = new WebClient();
  Client.Credentials = CredentialCache.DefaultCredentials;
  Client.DownloadFile(imageUrl, tempImage);
  newContact.AddPicture(tempImage);
}
That's it! Now you have an Outlook add-in Ribbon that calls SharePoint to pull social data into Outlook contacts. When you run the application, you'll see a ContactItem populated with SharePoint data, including the user's profile information and image.

Wrap Up
Now you've seen how easy it is to get data from SharePoint into Office clients. We've shown you a variety of options from no-code solutions to highly adaptable solutions using C# or Visual Basic.
Employing WCF Data Services to access SharePoint list data provides a common pattern for .NET developers that's quick and easy to implement. The client object model provides the means to access SharePoint external lists and opens a world of opportunities for bringing LOB data into Office. And, finally, SharePoint Web services enable the most flexible access to data, but also require a bit more commitment in terms of coding and testing.
Making data in SharePoint available to users as lists is an important step as it enables a great experience in the browser. Taking it a step further, you can leverage a variety of data access options to then bring the data into the Office applications that are familiar to users. Visual Studio 2010 makes all of this much easier to build, debug and deploy. As you can see, these represent some of the new and important development capabilities you can take advantage of with the new product releases.
More training, examples and information can be found online in the Office (msdn.microsoft.com/office) and SharePoint (msdn.microsoft.com/sharepoint) developer centers.

DONOVAN FOLLETTE is a Microsoft technical evangelist working with technologies including Active Directory, Lightweight Directory Services and Active Directory Federation Services. He now focuses on Office development and building integrated solutions with SharePoint 2010. Visit his blog at blogs.msdn.com/donovanf.

PAUL STUBBS is a Microsoft technical evangelist who focuses on the information worker development community for SharePoint and Office, Silverlight and Web 2.0 social networking. He's authored three books about solution development with Office, SharePoint and Silverlight. Read his blog at blogs.msdn.com/pstubbs.

THANKS to the following technical expert for reviewing this article: John Durant
SHAREPOINT SECURITY

Trim SharePoint Search Results for Better Security

Ashley Elenjickal and Pooja Harjani

Microsoft SharePoint search uses an account that usually has full read access across the repository to index its contents. So it's important that when a user queries for some content, he should be restricted to view only the documents he has permission to see. SharePoint uses the access control list (ACL) associated with each document to trim out query results that users have no permission to view, but the default trimming provided by SharePoint (out-of-box trimming) may not always be adequate to meet data security needs. In that case, you may want to further trim the results depending on an organization's authentication structure.

This is where the SharePoint custom security trimming infrastructure is useful. SharePoint lets you implement business logic in a separate module and then integrate it into the workflow of the query processor that serves the queries. In the security trimming path, custom query trimming follows out-of-box security trimming, so the number of query results after custom trimming must be equal to or less than the number of documents recalled before registering the custom security trimmer (CST) assembly.

Before delving into the CST architecture, we'll provide a quick view of SharePoint search and the new claims authentication infrastructure.

This article discusses:
• Claims authentication in SharePoint 2010
• Deploying a custom security trimmer
• Using PowerShell cmdlets
• Troubleshooting

Technologies discussed:
SharePoint, custom security trimmer

Code download available at:
code.msdn.microsoft.com/mag201007Search

SharePoint Search Overview
At a high level, the search system can be divided into two discrete parts: the gatherer pipeline and the query processor pipeline.

Gatherer Pipeline This part is responsible for crawling and indexing content from various repositories, such as SharePoint sites, HTTP sites, file shares, Lotus Notes, Exchange Server and so on. This component lives inside MSSearch.exe. When a request is issued to crawl a repository, the gatherer invokes a filter daemon, MssDmn.exe, to load the required protocol handlers and filters necessary to connect, fetch and parse the content. Figure 1 represents a simplified view of the gatherer pipeline.

[Figure 1 A Simplified View of the SharePoint Gatherer Pipeline: the gatherer inside MSSearch.exe invokes the filter daemon, MssDmn.exe, whose protocol handlers and filters connect to the content repositories (file shares, SharePoint sites, HTTP sites, Exchange Server) and feed the catalog/index.]

SharePoint can only crawl using a Windows NTLM authentication account. Your content source must authorize the Windows account sent as part of the crawl request in order to access the document content. Though claims authentication is supported in SharePoint 2010, the gatherer is still not a claims-aware application and will not access a content source that has claims authentication only.

Query Processor Pipeline In SharePoint 2010, two of the most important changes in the query processor pipeline are in its topological scalability and authentication model. In Microsoft Office SharePoint Server (MOSS) 2007, the query processor (search query and site settings service, referred to as search query service from here on) runs in the same process as the Web front end (WFE), but in SharePoint 2010 it can run anywhere in the farm, and it also runs as a Web service. The WFE talks to the search query service through Windows Communication Foundation (WCF) calls. The search query service is now completely built on top of the SharePoint claims authentication infrastructure. This decouples SharePoint search from its tight integration with Windows authentication and forms authentication. As a result, SharePoint now supports various authentication models.

The search query service trims the search results according to the rights of the user who issues the query. Custom security trimmers are called by the search query service after out-of-box trimming has completed. See Figure 2 for the various components involved when a query is performed.

[Figure 2 Workflow of a Query Originating from the Search Center in a SharePoint Site: the browser sends the query over HTTP to the Search Center Web Part on the WFE (w3wp.exe); the search object model calls the search query service application proxy, which gets a claims token for the user from the STS (Microsoft.SharePoint.dll) and makes a WCF call to the search query service application (w3wp.exe); the service fetches untrimmed results from the index services (MSSearch.exe), performs out-of-box security trimming, passes the out-of-box-trimmed results to any custom security trimmers, and returns the final results to the user.]

Custom security trimming is part of the query pipeline, so we'll limit this discussion to components of the query pipeline.

Claims Authentication in SharePoint 2010
A basic understanding of claims authentication support in SharePoint 2010 is required to implement custom trimming logic inside a CST assembly. In the claims authenticated environment, the user identity is maintained inside an envelope called a security token. It contains a collection of identity assertions or claims about the user. Examples of claims are username, e-mail address, phone number, role and so on. Each claim will have various attributes such as type and value. For example, in a claim the UserLogonName may be the type and the name of the user who is currently logged in may be the value.

Security tokens are issued by an entity called a security token service (STS). This is a Web service that responds to user authentication requests. Once the user is authenticated, the STS sends back a security token with all the user rights. The STS can be configured either to live inside the same SharePoint farm or to act as a relying party to another STS that lives outside the farm: Identity Provider-STS (IP-STS) and Relying Party-STS (RP-STS), respectively. Whether you want to use IP-STS or RP-STS has to be carefully considered while designing a SharePoint deployment.

SharePoint uses the default claims provider shipped with the product in a simple installation. Even if you set up the farm completely using Windows authentication, when a query is issued, a search service application proxy will talk to the STS to extract all the claims of the user in a security token. This token is then passed to the search query service through a WCF call.

Workflow of Custom Security Trimming
The workflow logic of a CST can be represented in a simple flowchart as shown in Figure 3. As stated earlier, the search query service first performs out-of-box security trimming and then looks for the presence of any CSTs associated with the search results. The association of a particular content source with a CST is done by defining a crawl rule for that specific content source. If the search query service finds any CST associated with any of the URLs in the search results, it calls into that trimmer. Trimmers are loaded into the same IIS worker process, w3wp.exe, in which the search query service is running.

Once the trimmer is loaded, the search query service calls into the CheckAccess method implemented inside the trimmer with an out-of-box trimming result set associated with the crawl rule that you defined earlier. The CheckAccess method decides whether a specific URL should be included in the final result set sent back to the user. This is done by returning a bit array. Setting a bit inside this array to either true or false will "include" or "block" the URL from the final result set. In case you want to stop processing the URLs due to performance or some unexpected reason, you must throw a PluggableAccessCheckException. If you throw after processing a partial list of URLs, the processed results are sent back to the user. The search query service will remove all the unprocessed URLs from the final result set.

Steps Involved in Deploying a Custom Security Trimmer
In a nutshell, there are five steps involved in the successful deployment of a CST:
1. Implement the ISecurityTrimmer2 interface:
   a. Implement the Initialize and CheckAccess methods using managed code.
   b. Create an assembly signing file and include it as part of the project.
   c. Build the assembly.
2. Deploy the trimmer into the Global Assembly Cache (GAC) of all the machines where a search query service is running.
3. Create a crawl rule for the content sources that you want to custom trim. You can do this from the Search Administration site.
4. Register the trimmer with the crawl rule using the Windows PowerShell cmdlet New-SPEnterpriseSearchSecurityTrimmer.
5. Perform a full crawl of the content sources associated with the crawl rules that you created in step 3. A full crawl is required to properly update all of the related database tables. An incremental crawl will not update the appropriate tables.

Implementing the Custom Security Trimmer Interface
MOSS 2007 and Microsoft Search Server (MSS) 2008 supported custom security trimming of search results through the interface ISecurityTrimmer. This interface has two methods, Initialize and CheckAccess. Because of the architectural changes in SharePoint and the search system in the 2010 versions, both of these methods won't work as they did in MOSS 2007. They need to be re-implemented using the ISecurityTrimmer2 interface. As a result, if you try to register a MOSS 2007 trimmer in SharePoint 2010, it will fail, saying ISecurityTrimmer2 is not implemented. Other changes from MOSS 2007 include:

Changes in the Initialize Method In MOSS 2007, one of the parameters passed was the SearchContext object. SearchContext was the entry point into the search system and it provided the search context for the site or search service provider (SSP). This class has been deprecated in 2010. Instead, use the SearchServiceApplication class:

void Initialize(NameValueCollection staticProperties,
  SearchServiceApplication searchApplication);

Changes in the CheckAccess Method In both MOSS 2007 and SharePoint 2010, the search query service calls into the CST assemblies. In MOSS 2007, the CheckAccess method took only two parameters, but in SharePoint 2010, the search query service passes the user identity into CheckAccess using a third parameter of type IIdentity:

public BitArray CheckAccess(IList<String> documentCrawlUrls,
  IDictionary<String, Object> sessionProperties, IIdentity passedUserIdentity)

ISecurityTrimmer2::Initialize Method This method is called the first time a trimmer is loaded into the search query service IIS worker process. The assembly will live for the duration of the worker process. Here's the signature of this method and a description of how it works:

void Initialize(NameValueCollection staticProperties,
  SearchServiceApplication searchApplication);

staticProperties–The trimmer registration Windows PowerShell cmdlet, New-SPEnterpriseSearchSecurityTrimmer, takes a parameter called "properties" (in MOSS 2007 this was called "configprops") through which you can pass named value pairs separated by ~. This may be useful to initialize your trimmer class properties. For example: when passing "superadmin~foouser~poweruser~baruser" to the New-SPEnterpriseSearchSecurityTrimmer cmdlet, the NameValueCollection parameter will have two items in the collection with keys as "superadmin" and "poweruser" and values as "foouser" and "baruser," respectively.

searchApplication–If your trimmer requires a deeper knowledge about the search service instance and the SharePoint farm, use a searchApplication object to determine that information. To learn more about the SearchServiceApplication class, refer to msdn.microsoft.com/library/ee573121(v=office.14).

ISecurityTrimmer2::CheckAccess Method This implements all the trimming logic. Pay special attention to two aspects in this method: the identity of the user who issued the query, and the performance latency caused by a large returned query set. Following are the signature of this method and a description of how it works:

public BitArray CheckAccess(IList<String> documentCrawlUrls,
  IDictionary<String, Object> sessionProperties, IIdentity passedUserIdentity)

documentCrawlUrls–The collection of URLs to be security trimmed by this trimmer.

sessionProperties–A single query instance is treated as one session. If your query fetches many results, the CheckAccess method is called multiple times. You can use this parameter to share values or to keep track of the URLs processed between these calls.

passedUserIdentity–This is the identity of the user who issued the query. It's the identity by which the code will allow or deny access to content.
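Taken together, the two members described above can be sketched as a minimal trimmer skeleton. This is only a sketch, not the article's sample code: the class name and the allow-everything decision are hypothetical placeholders, a real trimmer would apply actual authorization logic against passedUserIdentity, and the namespaces shown are the SharePoint 2010 search assemblies as commonly referenced.

```csharp
using System.Collections;
using System.Collections.Generic;
using System.Collections.Specialized;
using System.Security.Principal;
using Microsoft.Office.Server.Search.Administration;
using Microsoft.Office.Server.Search.Query;

// Hypothetical minimal trimmer: loads cleanly and includes every URL.
public class SampleSecurityTrimmer : ISecurityTrimmer2
{
  private NameValueCollection _props;

  public void Initialize(NameValueCollection staticProperties,
    SearchServiceApplication searchApplication)
  {
    // Keep the ~-separated name/value pairs passed at registration time,
    // e.g. -Properties superadmin~foouser~poweruser~baruser.
    _props = staticProperties;
  }

  public BitArray CheckAccess(IList<string> documentCrawlUrls,
    IDictionary<string, object> sessionProperties, IIdentity passedUserIdentity)
  {
    // One bit per candidate URL: true = include, false = block.
    var decisions = new BitArray(documentCrawlUrls.Count);
    for (int i = 0; i < documentCrawlUrls.Count; i++)
    {
      // A real trimmer would inspect passedUserIdentity's claims here
      // and could throw PluggableAccessCheckException to stop early.
      decisions[i] = true;
    }
    return decisions;
  }
}
```

Because the returned BitArray is positional, the decision at index i always applies to documentCrawlUrls[i]; the array length must match the URL count exactly.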
BitArray–You need to return a bit array equal to the number of items in documentCrawlUrls. Setting a bit inside this array to true or false will determine whether the URL at that position should be included or blocked from the final result set sent back to the user.

UserIdentity The SharePoint 2010 search query engine is built upon the claims authentication model. The search query service will pass the query issuer's claims through the IIdentity parameter. In order to get the user name of the user who issued the query, you must traverse through a collection of claims to compare the claim.ClaimType with SPClaimTypes.UserLogonName. The following snippet of code extracts the user logon name from the claims token:

IClaimsIdentity claimsIdentity = (IClaimsIdentity)passedUserIdentity;
if (null != claimsIdentity)
{
  foreach (Claim claim in claimsIdentity.Claims)
  {
    if (claim == null)
      continue;
    if (SPClaimTypes.Equals(claim.ClaimType, SPClaimTypes.UserLogonName))
      strUser = claim.Value;
  }
}

You may need information about the type of authentication used at the site collection level to correctly call internal APIs. To identify if the user logged in using Windows authentication, look for the presence of ClaimTypes.PrimarySid. The following code looks for the PrimarySid claim and then extracts the user name from it:

if (SPClaimTypes.Equals(claim.ClaimType, ClaimTypes.PrimarySid))
{
  // Extract SID in the format "S-1-5-21-xxxxx-xxxxx-xxx"
  strUser = claim.Value;
  // Convert SID into NT Format "FooDomain\BarUser"
  SecurityIdentifier sid = new SecurityIdentifier(strUser);
  strUser = sid.Translate(typeof(NTAccount)).Value;
}

For forms or other similar non-Windows authentication providers, look at the Claim.OriginalIssuer value inside the claim. For example, if the server is configured for forms authentication using the ASP.NET SQL Membership Provider, the Claim.OriginalIssuer will have the value "Forms:AspNetSqlMembershipProvider":

if (SPClaimTypes.Equals(claim.ClaimType, SPClaimTypes.UserLogonName))
{
  strUser = claim.Value;
  strProvider = claim.OriginalIssuer; // For AspNet SQL Provider value will be
                                      // "Forms:AspNetSqlMembershipProvider"
}

If the query is issued by an anonymous user, the value of the IIdentity.IsAuthenticated property will be false. In this case, claimsIdentity.Name will have the value "NT AUTHORITY\\ANONYMOUS LOGON."

As a final note on the user context, limit the use of the API WindowsIdentity.GetCurrent().Name to retrieve the user identity. This will always give the application pool identity under which the search query service is running. System.Threading.Thread.CurrentPrincipal.Identity will give you the same identity as the one passed to the CheckAccess method.

Performance Considerations Optimize the CheckAccess method to its fullest extent. If the query returns many results, the trimmer may get called multiple times. One of the common methods to take care of this situation is to keep track of the URLs processed inside the trimmer through the sessionProperties parameter. Once the method processes a certain number of result sets, it can throw a PluggableAccessCheckException. When this exception is thrown, the URLs processed up to that point are returned to the user.

[Figure 3 The Workflow Logic of a CST: a query request arrives from the user → the search query service (SQS) processes the query → SQS fetches results from the index → SQS performs out-of-box security trimming → if any URLs in the search results match a crawl rule, and a CST is registered with the matched crawl rule, SQS loads the CST (if it isn't already loaded) and the CST trims the search results → the trimmed results are sent back to the user. If no crawl rule matches or no CST is registered, the out-of-box-trimmed results are returned directly.]

Custom Security Trimmer and System Logs
Code inside a trimmer can't write to the system logs maintained at <drive>\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\LOGS. The trimmer must maintain its own logging mechanism for both debugging and auditing. The only exception to this is when the method throws the PluggableAccessCheckException: the message string specified while throwing will be logged into the system log. Useful information that the search query service logs to the file includes the number of documents that were security trimmed. For example, the following log entry suggests that a query passed two documents to the CST, but sent zero documents back to the user, which means the CST trimmed those two documents:

04/23/2010 18:13:48.67 w3wp.exe (0x116C) 0x02B4 SharePoint Server Search Query Processor dm2e Medium Trim results: First result position = '0', actual result count = '0', total docs found = '0', total docs scanned = '2'. 742d0c36-ea37-4eee-bf8c-f2c662bc6a45

Custom Security Trimmers and Alerts The SharePoint search service has a feature called alerts (available only in Windows authentication mode) that can push the changes in the query results to the user through e-mails. However, when an alert query is issued by the timer service, the search query service will strip out all the URLs associated with CSTs.

Assembly Signing Requirement On finding the presence of a matching CST, the search query service calls into CST management code to load the specific assembly from the GAC. To do this, the assembly needs to be digitally signed. Refer to "Managing Assembly and Manifest Signing" (msdn.microsoft.com/library/ms247066) for ways to sign an assembly. Once the assembly is built, use the sn.exe tool to get the 64-bit hash known as a public key token. This token is needed at the time of trimmer registration.

Deployment of the Custom Security Trimmer The CST assembly must reside in the GAC of each machine on which the search query and site settings service is running. Use Central Administration | System Settings | Services on Server to check the status of the search query and site settings service on each of the machines in the farm. If the service is started, you must import the CST to that machine. Don't confuse the search query and site settings service with the machines that contain query components. The query component lives within MSSearch.exe to pull the results from the index. The search query and site settings service lives in its own IIS worker process, w3wp.exe.

SharePoint Cmdlets to Register, View and Delete CSTs
MOSS 2007 used the stsadm.exe command-line tool to register custom trimmers, but this tool is obsolete and not supported in SharePoint 2010. Instead, use Windows PowerShell cmdlets to register, view and delete CSTs. An assembly should already be available in the GAC to register them. Here's how to use them:

Registration–Use New-SPEnterpriseSearchSecurityTrimmer to register your trimmer, using the assembly's manifest data such as Version, Culture and PublicKeyToken. This example registers the trimmer to the search application named "Search Service Application":

New-SPEnterpriseSearchSecurityTrimmer -SearchApplication "Search Service Application" -TypeName "SearchCustomSecurityTrimmer.CustomSecurityTrimmerTest, SearchCustomSecurityTrimmer, Version=14.0.0.0, Culture=neutral, PublicKeyToken=4ba2b4aceeb50e6d" -RulePath file://elenjickal2/* -id 102 -Properties superadmin~foouser~poweruser~baruser

The cmdlet takes the crawl rule (RulePath), an integer value as the identity (id) of the trimmer, configuration properties (Properties) and TypeName, which consists of the manifest data as well as the name of the class that implements the interface. Cmdlet parameters are:
• SearchApplication–Name of the search service application associated with the content source
• TypeName–This consists of the manifest data such as Version, Culture and PublicKeyToken (it also points to the class that implements the interface; this will uniquely identify the assembly from the GAC)
• RulePath–The crawl rule associated with the trimmer
• Id–An int data type that uniquely identifies the trimmer instance
• Properties–Set of name/value pairs separated by ~

View–Use the Get-SPEnterpriseSearchSecurityTrimmer cmdlet and pass the search application name. You can further filter it by passing the trimmer identity or other properties that you used while registering (for example: Get-SPEnterpriseSearchSecurityTrimmer -SearchApplication "Search Service Application").

Delete–Use the Remove-SPEnterpriseSearchSecurityTrimmer cmdlet and pass the search application name as well as the identity of the trimmer (for example: Remove-SPEnterpriseSearchSecurityTrimmer -SearchApplication "Search Service Application" -id 102).

Note: After registering the CST, a full crawl of the content source is required.

Troubleshooting Steps
Here are some tips to investigate any unexpected search results:
• Make sure the crawl rule matches the content source location.
• Check the crawl logs to make sure the account used to crawl the content source has access to it. The crawl would have failed if it doesn't.
• Make sure the query user has permission to view the content.
• After trimmer registration, make sure you performed a full crawl.
• Make sure the trimmer assembly is in the GAC of all machines on which the search query service is running.
• Check the system logs for the number of documents trimmed by the security trimmer.
• Use the Process Explorer utility from technet.microsoft.com/sysinternals/bb896653 to make sure the trimmer assembly is loaded into the IIS worker process w3wp.exe.
• Attach the debugger to the worker process in which the assembly is loaded and step through the trimmer logic.

Query Processing Logic Flexibility
Wrapping up, CSTs provide the flexibility to extend the query processing logic to meet customized enterprise security needs. One should always keep in mind that implementation bugs inside the trimmer may cause unexpected search results, so it's important that before the trimmer is deployed in a production environment, it's thoroughly tested against different types of content sources and authentication providers.

ASHLEY ELENJICKAL AND POOJA HARJANI were part of a SharePoint Search feature team responsible for the Custom Security Trimmer at Microsoft. They can be reached at [email protected] and [email protected], respectively.

THANKS to the following technical expert for reviewing this article: Michal Piaseczny
ONENOTE 2010

Creating OneNote 2010 Extensions with the OneNote Object Model

Andy Gray

Microsoft Office OneNote is a powerful digital notebook for collecting, organizing, searching and sharing information. With the recent release of Microsoft Office 2010, not only is the OneNote user experience improved, but OneNote notebooks are now more universally available. Users can synchronize content among computers via Windows Live; search, edit and share notes from any Web browser; and access full notebooks from Windows Mobile (and, soon, Windows Phone 7). Further, OneNote was previously included only in some Office editions, but it's now in every edition of Office 2010. All of these factors create a more compelling opportunity than ever before to integrate OneNote into information management solutions.

In this article, I'll provide an overview of developing applications that interoperate with data from Microsoft OneNote 2010 and 2007. In the process, I'll introduce the OneNote Object Model project that is freely available on CodePlex and demonstrate how this library makes it easy to integrate information from OneNote notebooks, sections and pages into client applications.

Note: The OneNote Object Model library on CodePlex, to which this article refers, had not been updated for compatibility with OneNote 2010 at the time of this writing.

This article discusses:
• The evolution of OneNote development
• Accessing OneNote data using the COM API
• Retrieving and updating page content using the COM API
• The OneNote Object Model library
• Data binding with the OneNote Object Model library

Technologies discussed:
OneNote 2010, OneNote 2007, Visual Studio 2010, LINQ, OneNote Object Model, XAML Data Binding, Windows Presentation Foundation, C#

Code download available at:
code.msdn.microsoft.com/mag201007OneNote

The Evolution of OneNote Development
The initial release of OneNote 2003 didn't provide an API to external applications. Shortly thereafter, however, OneNote 2003 SP1 added a COM library, called the OneNote 1.1 Type Library, which enabled programmatic import of images, ink and HTML into OneNote via a simple class called CSimpleImporter. Notably, however, this class only provided data import capabilities; you could use it to push data into OneNote notebooks, but there was no way to get content back out programmatically.

The release of OneNote 2007 brought much more powerful development capabilities with a new COM API that provides the ability to import, export and modify OneNote 2007 content programmatically. The OneNote Application class in that library provides a rich collection of methods for working with:
• Notebook structure: discovering, opening, modifying, closing and deleting notebooks, section groups and sections
• Page content: discovering, opening, modifying, saving and deleting page content
• Navigation: finding, linking to and navigating to pages and objects

Most of these methods return or accept XML documents that represent both notebook structure and page content. Saul Candib wrote a two-part series, "What's New for Developers in OneNote 2007," that documents this API at msdn.microsoft.com/library/ms788684(v=office.12), and the XML schema is detailed at msdn.microsoft.com/library/aa286798(office.12).

The XML schema for OneNote 2010 is substantially similar to that in OneNote 2007. OneNote 2010 introduces a file format change to support some of its new features (such as linked note-taking, versioning, Web sharing, multilevel subpages and equation support). However, OneNote 2010 can continue to work on OneNote 2007 notebooks without changing the file format. In OneNote 2010, retrieving data from sections stored in the OneNote 2007 file format will yield XML documents similar to those in OneNote 2007. The primary differences in the XML schema for OneNote 2010 sections are additive changes to support the new features listed earlier. A new XMLSchema enumeration is available to represent the OneNote schema version; many of the OneNote methods have new overloads that take an XMLSchema parameter to indicate the schema version desired.

Note that the CSimpleImporter class, introduced in OneNote 2003 and still available in OneNote 2007, has been removed from OneNote 2010, so applications that use this class need to be rewritten to use the new interfaces in order to work with OneNote 2010.

Accessing OneNote Data Using the COM API
It's fairly straightforward to start using the OneNote COM API to access live data from OneNote notebooks. Start by creating a new console application in Visual Studio and then add a reference to the Microsoft OneNote 14.0 Type Library COM component (for OneNote 2010) or the Microsoft OneNote 12.0 Type Library COM […] the OneNote API. The code in Figure 1 uses the GetHierarchy method to retrieve an XML document containing a list of OneNote notebooks, then uses LINQ to XML to extract and print the notebook names to the console.

Figure 1 Enumerating Notebooks

using System;
using System.Linq;
using System.Xml.Linq;
using Microsoft.Office.Interop.OneNote;

class Program
{
  static void Main(string[] args)
  {
    var onenoteApp = new Application();

    string notebookXml;
    onenoteApp.GetHierarchy(null, HierarchyScope.hsNotebooks, out notebookXml);

    var doc = XDocument.Parse(notebookXml);
    var ns = doc.Root.Name.Namespace;
    foreach (var notebookNode in
      from node in doc.Descendants(ns + "Notebook") select node)
    {
      Console.WriteLine(notebookNode.Attribute("name").Value);
    }
  }
}

The HierarchyScope enumeration, passed as the second parameter to the GetHierarchy method, specifies the depth of the notebook structure to retrieve. To retrieve sections in addition to the notebooks, simply update this enumeration value to HierarchyScope.hsSections and process the additional XML child nodes, as demonstrated in Figure 2.

Retrieving and Updating Page Content
The GetPageContent method will return an XML document containing all of the content on a specified page. The page to retrieve is specified using a OneNote object ID, a string-based

Figure 2 Enumerating Sections

using System;
using System.Linq;
component (for OneNote 2007). using System.Xml.Linq;
using Microsoft.Office.Interop.OneNote;
If you’re using Visual Studio 2010 to develop OneNote 2010
applications, take note of a couple minor compatibility issues. First, class Program
{
due to a mismatch of the OneNote interop assembly that shipped static void Main(string[] args)
with Visual Studio 2010, you should not directly reference the {
var onenoteApp = new Application();
Microsoft.Office.Interop.OneNote component on the .NET tab of the
Add Reference dialog, but instead reference the Microsoft OneNote string notebookXml;
onenoteApp.GetHierarchy(null, HierarchyScope.hsSections, out notebookXml);
14.0 Type Library component on the COM tab. This still results in the
addition of a OneNote interop assembly to your project’s references. var doc = XDocument.Parse(notebookXml);
var ns = doc.Root.Name.Namespace;
Second, the OneNote 14.0 Type Library is not compatible with foreach (var notebookNode in from node in doc.Descendants(ns +
the Visual Studio 2010 “NOPIA” feature (in which primary interop "Notebook") select node)
{
assemblies are not embedded in the application by default). There- Console.WriteLine(notebookNode.Attribute("name").Value);
fore, make sure to set the Embed Interop Types property to False foreach (var sectionNode in from node in
notebookNode.Descendants(ns + "Section") select node)
for the OneNote interop assembly reference. (Both of these {
issues are described in more detail on OneNote Program Manager Console.WriteLine(" " + sectionNode.Attribute("name").Value);
}
Daniel Escapa’s blog at blogs.msdn.com/descapa/archive/2010/04/27/ }
onenote-2010-and-visual-studio-2010-compatibility-issues.aspx.) With the }
}
OneNote library reference in place, you’re ready to make calls to
msdnmagazine.com July 2010 45
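For reference, the hierarchy document that GetHierarchy returns is shaped roughly as follows. The Notebook and Section element names and the name and ID attributes are the ones the code in Figures 1 and 2 queries; the notebook names, IDs, path and namespace URI below are illustrative placeholders, not values from the article:

```xml
<one:Notebooks xmlns:one="http://schemas.microsoft.com/office/onenote/2010/onenote">
  <one:Notebook name="Work Notebook" ID="{35BC...}{1}{B0}" path="C:\Users\...\Work Notebook\">
    <one:Section name="Projects" ID="{8A1F...}{1}{B0}" />
    <one:Section name="Meeting Notes" ID="{4C2D...}{1}{B0}" />
  </one:Notebook>
</one:Notebooks>
```

Notebooks appear as Notebook elements, sections as their child Section elements (when hsSections or a deeper scope is requested), and each node's ID attribute is the OneNote object ID that methods such as GetPageContent expect.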
A OneNote object ID is a string-based unique identifier for each object in the OneNote notebook hierarchy. This object ID is included as an attribute on the XML nodes returned by the GetHierarchy method.

Figure 3 builds on the previous examples by using the GetHierarchy method to retrieve the OneNote notebook hierarchy down to page scope. It then uses LINQ to XML to select the node for the page named "Test page" and pass that page's object ID to the GetPageContent method. The XML document representing the page content is then printed to the console.

Figure 3 Getting Page Content

using System;
using System.Linq;
using System.Xml.Linq;
using Microsoft.Office.Interop.OneNote;

class Program
{
  static void Main(string[] args)
  {
    var onenoteApp = new Application();

    string notebookXml;
    onenoteApp.GetHierarchy(null, HierarchyScope.hsPages, out notebookXml);

    var doc = XDocument.Parse(notebookXml);
    var ns = doc.Root.Name.Namespace;
    var pageNode = doc.Descendants(ns + "Page").Where(n =>
      n.Attribute("name").Value == "Test page").FirstOrDefault();
    if (pageNode != null)
    {
      string pageXml;
      onenoteApp.GetPageContent(pageNode.Attribute("ID").Value, out pageXml);
      Console.WriteLine(XDocument.Parse(pageXml));
    }
  }
}

The UpdatePageContent method can be used to make changes to a page. The page content is specified by the same XML document schema that the code in Figure 3 retrieved; it can contain various content elements that define text outlines, inserted files, images, ink, and audio or video files.

The UpdatePageContent method treats the elements in the provided XML document as a collection of content that may have changed, matching specified content to existing content via its OneNote object ID. You can therefore make changes to existing content by calling the GetPageContent method, making the desired changes to the XML returned, then passing that XML back to the UpdatePageContent method. You can also specify new content elements to be added to the page.

To illustrate this, Figure 4 adds a date stamp to the bottom of our test page. It uses the approach shown in Figure 3 to determine the OneNote object ID of the page, and then uses the XDocument and XElement classes in System.Xml.Linq to construct an XML document containing the new content. Because the Page object ID specified in the document matches the object ID of an existing page, the UpdatePageContent method will append the new content to the existing page.

Figure 4 Updating Page Content

using System;
using System.Linq;
using System.Xml.Linq;
using Microsoft.Office.Interop.OneNote;

class Program
{
  static void Main(string[] args)
  {
    var onenoteApp = new Application();

    string notebookXml;
    onenoteApp.GetHierarchy(null, HierarchyScope.hsPages, out notebookXml);

    var doc = XDocument.Parse(notebookXml);
    var ns = doc.Root.Name.Namespace;
    var pageNode = doc.Descendants(ns + "Page").Where(n =>
      n.Attribute("name").Value == "Test page").FirstOrDefault();

    if (pageNode != null)
    {
      // Read the ID only after confirming the page node exists.
      var existingPageId = pageNode.Attribute("ID").Value;

      var page = new XDocument(new XElement(ns + "Page",
        new XElement(ns + "Outline",
          new XElement(ns + "OEChildren",
            new XElement(ns + "OE",
              new XElement(ns + "T",
                new XCData("Current date: " +
                  DateTime.Now.ToLongDateString())))))));
      page.Root.SetAttributeValue("ID", existingPageId);
      onenoteApp.UpdatePageContent(page.ToString(), DateTime.MinValue);
    }
  }
}

The OneNote Object Model Library

It isn't particularly difficult to interact with OneNote data in this way, but it's a bit awkward to parse and construct XML documents just to perform basic data operations. That's where the OneNote Object Model comes in. It's a managed code library that provides object-oriented abstractions over the COM-based OneNote API. The library is open source and licensed under the Microsoft Public License (Ms-PL).

The OneNote Object Model is available for download on CodePlex at onom.codeplex.com. The library was designed for OneNote 2007, and by the time you read this, the release downloads should be updated to provide compatibility with OneNote 2010. If not, you can still use it with OneNote 2007 sections in OneNote 2010 by downloading the source code, removing the existing Microsoft.Office.Interop.OneNote assembly reference in the OneNoteCore project and adding a reference to the Microsoft OneNote 14.0 Type Library as shown previously.

In addition to some unit test projects and sample code, the solution contains two class library projects: OneNoteCore and OneNoteFramework. The OneNoteCore library is the low-level bridge between the OneNote COM API and familiar Microsoft .NET Framework metaphors; it exposes real return values instead of COM out parameters, converts COM error codes into .NET exceptions, exposes a OneNoteObjectId struct and XDocument instances instead of raw strings, and more. Studying this code can help you understand how the OneNote API works, but in most cases you won't need to interact with the OneNoteCore library directly.

The OneNoteFramework library provides higher-level abstractions of OneNote concepts. Here you'll find classes with intuitive names like OneNoteNotebook, OneNoteSection and OneNotePage. The primary entry point for interacting with the OneNote hierarchy structure is a class called OneNoteHierarchy, which contains a static member called Current. By adding an

assembly reference to the OneNoteFramework library, we can rewrite our program to enumerate the notebook names (Figure 1) much more concisely as follows:

using Microsoft.Office.OneNote;

class Program
{
  static void Main(string[] args)
  {
    foreach (var notebook in OneNoteHierarchy.Current.Notebooks)
      System.Console.WriteLine(notebook.Name);
  }
}

As you might expect, the OneNoteNotebook class has a property called Sections. Therefore, you can enumerate the section names (Figure 2) simply as follows:

using Microsoft.Office.OneNote;

class Program
{
  static void Main(string[] args)
  {
    foreach (var notebook in OneNoteHierarchy.Current.Notebooks)
    {
      System.Console.WriteLine(notebook.Name);
      foreach (var section in notebook.Sections)
      {
        System.Console.WriteLine("  " + section.Name);
      }
    }
  }
}

Collections exposed by OneNote Object Model properties are managed with a specialized generic collection class called OneNoteObjectCollection<T>. Because OneNoteObjectCollection<T> implements IList<T>, as well as IEnumerable<T>, these collections can be queried using LINQ.

For example, given a reference to a OneNoteSection instance in the section variable, we could determine all of the pages that had been modified today with a simple LINQ expression like this:

var pagesModifiedToday = from page in section.Pages
                         where page.LastModifiedTime >= DateTime.Today
                         select page;

Data Binding with OneNote Object Model Library

The fact that the OneNote Object Model exposes IEnumerable collections also enables XAML-based data binding with Windows Presentation Foundation (WPF). Figure 5 demonstrates the use of data binding to display a WPF TreeView of the OneNote notebook hierarchy purely in XAML markup, without requiring the use of code-behind.

Figure 5 Data Binding with Windows Presentation Foundation

<Window x:Class="NotebookTree.MainWindow"
  xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
  xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
  xmlns:onf="clr-namespace:Microsoft.Office.OneNote;assembly=OneNoteFramework"
  Title="OneNote Notebook Hierarchy" >
  <Grid>
    <Grid.Resources>
      <DataTemplate x:Key="PageTemplate">
        <StackPanel Orientation="Horizontal">
          <Image Source="Images\Page16.png" Margin="0,0,2,0"/>
          <TextBlock Text="{Binding Name}" />
        </StackPanel>
      </DataTemplate>
      <HierarchicalDataTemplate x:Key="SectionTemplate"
        ItemsSource="{Binding Pages}"
        ItemTemplate="{StaticResource PageTemplate}">
        <StackPanel Orientation="Horizontal">
          <Image Source="Images\Section16.png" Margin="0,0,2,0"/>
          <TextBlock Text="{Binding Name}" />
        </StackPanel>
      </HierarchicalDataTemplate>
      <HierarchicalDataTemplate x:Key="NotebookTemplate"
        ItemsSource="{Binding Sections}"
        ItemTemplate="{StaticResource SectionTemplate}">
        <StackPanel Orientation="Horizontal">
          <Image Source="Images\Book16.png" Margin="0,0,2,0"/>
          <TextBlock Text="{Binding Name}" />
        </StackPanel>
      </HierarchicalDataTemplate>
    </Grid.Resources>

    <TreeView Name="NotebookTree" BorderThickness="0"
      HorizontalAlignment="Left" VerticalAlignment="Top"
      ItemsSource="{Binding Notebooks}"
      ItemTemplate="{StaticResource NotebookTemplate}"
      DataContext="{Binding Source={x:Static onf:OneNoteHierarchy.Current}}" />
  </Grid>
</Window>

Figure 6 Data Binding the Hierarchy to a Tree View (screenshot)

This XAML first references the OneNoteFramework assembly, giving it the XML namespace prefix onf. With this reference in place, the DataContext for the TreeView can then be set to the static Current property of the OneNoteHierarchy class, providing the control with the root of the OneNote hierarchy structure. HierarchicalDataTemplates are then used to data bind each level of the tree with the corresponding collection exposed by the OneNote Object Model (see Figure 6).

Simplified Data Access

Wrapping up, the OneNote Object Model library substantially simplifies access to data in Microsoft OneNote notebooks, exposing rich object collections that can be queried and manipulated with LINQ expressions and WPF data binding. A follow-up article will extend these concepts to explore working with OneNote notebooks in Silverlight and Windows Phone applications, and accessing OneNote data in the cloud.

ANDY GRAY is a partner and technology director of Five Talent Software, helping nonprofit organizations operate more effectively through strategic technology solutions. He writes about OneNote development at onenotedev.com.

THANKS to the following technical experts for reviewing this article: Michael Gerfen and John Guin
OFFICE SERVICES

Merging Word Documents on the Server Side with SharePoint 2010

Ankush Bhatia and Manvir Singh

This article discusses:
• The status report template
• Creating a SharePoint document library
• Building the Web Part
• Merging the reports

Technologies discussed:
Office 2010, SharePoint 2010

Code download available at:
code.msdn.microsoft.com/mag201007DocMerge

Business application developers must often create solutions that automate day-to-day activities for their organizations. These activities typically involve processing and manipulating data in various documents: for example, extracting and consolidating data from multiple source documents, merging data into e-mail messages, searching and replacing content in documents, recalculating data in workbooks, extracting images from presentations ... and the list goes on and on.

Microsoft Office makes these kinds of repetitive tasks simpler by providing a rich API that developers can use to automate them. Because such solutions work seamlessly for normal desktop users, developers have taken them to the next level: deploying the solutions to servers that provide a central point where all of this repetitive work can be addressed for multiple users without any human intervention. Although moving solutions that complete repetitive Office tasks from the desktop to a server seems straightforward, it's not quite as simple as it sounds.

Microsoft designed the Office application suite for desktop computer scenarios where a user is logged on to a machine and is sitting in front of it. For reasons of security, performance and reliability, Office applications are not the right tools for server-side scenarios. Office applications in a server environment may require manual intervention, and that's not optimal for a server-side solution. Microsoft recommends avoiding this kind of solution, as explained in the Microsoft Support article, "Considerations for server-side Automation of Office" (support.microsoft.com/kb/257757).

Since the release of Office 2007, however, the Office automation story has changed a great deal. With Office 2007 Microsoft introduced Office OpenXML and Excel Services for developers who would like to develop Office-based solutions on the server.

With Office 2010 and SharePoint 2010, Microsoft has come up with a new set of components called Application Services. These put a rich set of tools in a developer's bag for Office automation solutions. Application Services include Excel Services, Word Automation Services, InfoPath Forms Services, PerformancePoint Services and Visio Services. You can learn more about the details of these services at msdn.microsoft.com/library/ee559367(v=office.14).
In this article, we will show you how to use Office OpenXML, Word Automation Services and SharePoint to build a simple application that merges separate status reports into a single document.

Status Report Workflow

Let's say you're a developer working at a services-oriented company in which many projects are managed by different teams. Every week, each project manager uses a common template to create a weekly status report and upload it to an internal SharePoint repository. Now your Group Manager wants to get a consolidated report that will contain all of these weekly status reports and, guess what, you are the chosen one who has to implement this requirement.

You're lucky, though. As we mentioned earlier, your life is easier today because you can implement this requirement with much less effort using OpenXML and Word Automation Services. You'll be able to produce a more robust and stable solution than you could have without these technologies.

Let's start by visualizing the solution. Figure 1 shows a proposed workflow. The process kicks off with individual project managers filling out status reports and uploading them to SharePoint on the server. The Group Manager can then initiate the process of merging any reports stored on the server and generating a combined report.

Figure 1 Workflow for Generating a Status Report (diagram: project managers upload individual weekly status reports for all projects to a SharePoint store; the Group Manager requests the consolidated status report)

Building a Template

To implement this solution, the first step is to provide a common template to all the project managers for filling out the weekly status reports. When they finish filling in the data, they'll upload the reports to a SharePoint repository. On Monday morning, the Group Manager can then log into the SharePoint site and fire up the logic that performs the following tasks:

1. Reads all of the individual status report documents.
2. Merges them into a single report.
3. Saves the report in the repository for users to access.

Figure 2 Weekly Status Report Template (screenshot)

Figure 2 shows what the status report template will look like (let's call it WeeklyStatusReport.dotx). As you can see, the template includes fields to capture a title, dates, the project manager's name,


milestones and associated data, and text fields for entering details about accomplishments, future plans and problems. In this case we've used text fields and the date picker control for simplicity, but you could easily use drop-down lists, check boxes or a variety of other controls to streamline data entry.

The Document Library

The next step is to create a custom document library that hosts the weekly status reports based on this template.

In the SharePoint navigation pane, click Libraries and then Create to create a new library. In the Create dialog, filter by Library, select Document Library and type a name for the library (we used WSR Library). Now click Create.

Now you need to create a content type for the new library. Click Site Actions, then Site Settings, and under the Galleries section, click Site content types. Click Create and then type a name for the content type (we used Weekly Status Report). In the Select Parent Content Type From list, select Document Content Types. In the Parent Content type list, select Document and click OK.

Under Settings, select Advanced Settings, then choose the "Upload a new document template" radio button and click Browse. Find the report template (WeeklyStatusReport.dotx) and upload it to the library.

Next, go to WSR Library and select Library Settings. Under General Settings, select Advanced Settings. Select Yes for "Allow management of content types," then click OK.

You'll see a list of content types shown on the library settings page. Select the "Add from Existing Site Content Types" link. Select the content type you created earlier in the available site content types list. In my example, this is Weekly Status Report. Click Add, and click OK.

Again from the content types list, click on Document and select "Delete this content type." Select OK in the warning message box. Now you should see your content type when you select New Document in your WSR Library, as shown in Figure 3.

Figure 3 Selecting the Custom Content Type (screenshot)

At this point you can go ahead and add a couple of status reports to the document library.

Creating the Web Part

Next, you need to enable a Group Manager to kick off the consolidation logic. You can do this via a button at the bottom of the default view of the document library.

There are two steps involved here. First, you'll create a Visual Web Part using Visual Studio 2010. Second, you'll add the Web Part to the document library using SharePoint Designer 2010.

To create a custom Web Part, start a new project in Visual Studio 2010 using the Visual Web Part project template. Give the project a name such as DocumentMerge, then click OK. In the SharePoint Customization Wizard page, select your Web application (the URL to the SharePoint site hosting your document library), then click Finish.

Once the project is created, open the VisualWebPart1.cs file and modify the CreateChildControls method with the following code:

protected override void CreateChildControls() {
  Control control = Page.LoadControl(_ascxPath);
  Controls.Add(control);
  base.CreateChildControls();

  Button btnSubmit = new Button();
  btnSubmit.Text = "Merge Reports";
  btnSubmit.Click += new EventHandler(OnSubmitClick);
  Controls.Add(btnSubmit);
}

Also add an event handler for the button click:

void OnSubmitClick(object sender, EventArgs e) {
  // TODO : Put code to merge documents here
}

At this point you can build and deploy your project. We will add the implementation to our OnSubmitClick handler a bit later in this article.

The next step is to add the Web Part to the document library. In SharePoint Designer 2010, open the SharePoint site. Click All Files | WSR Library | Forms, then click on AllItems.aspx to edit it.


to provide logic for reading the
reports that were uploaded to
the document library, generating
an empty OpenXML document,
and merging the reports into the
new document.
First, you need to read any
documents in the current library.
You can loop through the SPList-
ItemCollection of the current
SPContext, reading each file into a
byte array using the SPFile.Open-
Binary API:
SPListItemCollection files =
SPContext.Current.List.Items;
foreach (SPListItem item in files) {
SPFile inputFile = item.File;
byte[] byteArray =
inputFile.OpenBinary();

// process each byte array


}
Next, generate the empty
OpenXML document. This
Figure 4 Inserting the Web Part
requires generating the document
in memory using a MemoryStream
Click the bottom of the page. Click Insert | Web Part, and then because the OpenXML SDK does not let you save documents to
select More Web Parts. In the search box, type VisualWebPart (the a URI. Instead, the MemoryStream object can dump the docu-
ment into the library as a new file. The code for creating the file is

The altChunks get replaced


shown in Figure 6.
Note that you need to add DocumentFormat.OpenXml.dll and

with original content when a


WindowsBase.dll in the references and the corresponding using
statements to the code:

document is opened in Word. using DocumentFormat.OpenXml.Packaging;


using DocumentFormat.OpenXml.Wordprocessing;

The next step is to implement the logic for saving the merged
name of the Web Part you just created and deployed), and click OK document to the library as a new document. This requires a bit of
(see Figure 4). Figure 5 shows the page with the Web Part in place. effort, but you can make it easier by using the SharePoint Managed
Save the page and close SharePoint Designer. Client Object Model. You’ll need to add two references to the

Merging the Reports


Now, let’s add the logic to merge the
uploaded documents in the doc-
ument library. For simplicity, this
code will merge all the documents
uploaded to this folder into a sin-
gle file. A more realistic approach
would be to merge only selected
items or only items uploaded in a
specified time period. You could
also save the merged document
to a different location or different
library. This is when we’ll add the
implementation to our OnSubmit-
Click handler of our VisualWeb-
Part project in Visual Studio 2010.
In the OnSubmitClick han-
dler of the Web Part, you need Figure 5 The Web Part in Place on the Page
54 msdn magazine Office Services
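The merge step itself (appending each uploaded report to the new document) is handled in the article's code download rather than shown in the snippets here. A common OpenXML SDK approach, consistent with the altChunk discussion later in the article, is to import each report's bytes as an AlternativeFormatImportPart and reference it from the body with an AltChunk element. Below is a minimal sketch, not the authors' exact code; it assumes the mainPart created in Figure 6 and a reportByteArrays collection holding the byte arrays read via SPFile.OpenBinary (the variable names and chunk-ID scheme are illustrative):

```csharp
// Append each uploaded report to the merged document as an altChunk.
// 'mainPart' is the MainDocumentPart built for the empty document in
// Figure 6; 'reportByteArrays' holds each report's DOCX bytes.
int chunkIndex = 0;
foreach (byte[] byteArray in reportByteArrays) {
  string altChunkId = "ReportChunk" + chunkIndex++;

  // Import the report's bytes as an alternative-format part.
  AlternativeFormatImportPart chunk = mainPart.AddAlternativeFormatImportPart(
    AlternativeFormatImportPartType.WordprocessingML, altChunkId);
  using (MemoryStream chunkStream = new MemoryStream(byteArray)) {
    chunk.FeedData(chunkStream);
  }

  // Reference the imported part from the document body. Word (or Word
  // Automation Services, as in Figure 7) expands each altChunk into
  // real content when the document is opened or converted.
  mainPart.Document.Body.AppendChild(new AltChunk() { Id = altChunkId });
}
mainPart.Document.Save();
```

With the chunks appended, the in-memory document is ready to be written back to the library.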
Figure 6 Creating a New File for the Merged Report

// String containing the blank document part for our new DOCX
string strEmptyMainPart =
  "<?xml version='1.0' encoding='UTF-8' standalone='yes'?>" +
  "<w:document xmlns:w='http://schemas.openxmlformats.org/wordprocessingml/2006/main'>" +
  "<w:body><w:p><w:r><w:t></w:t></w:r></w:p></w:body></w:document>";

// In-memory stream for our consolidated DOCX.
MemoryStream memOut = new MemoryStream();

// Output document's OpenXML object
WordprocessingDocument outputDoc =
  WordprocessingDocument.Create(memOut,
    DocumentFormat.OpenXml.WordprocessingDocumentType.Document);
MainDocumentPart mainPart = outputDoc.AddMainDocumentPart();

Stream partStream = mainPart.GetStream();
UTF8Encoding encoder = new UTF8Encoding();

// Add blank main part string to the newly created document
Byte[] buffer = encoder.GetBytes(strEmptyMainPart);
partStream.Write(buffer, 0, buffer.Length);

// Save the document in memory
mainPart.Document.Save();

The assemblies Microsoft.SharePoint.Client.dll and Microsoft.SharePoint.Client.Runtime.dll are found in the following folder:

%ProgramFiles%\Common Files\Microsoft Shared\web server extensions\14\ISAPI

Create a new document in the SharePoint library with this code:

ClientContext clientContext =
  new ClientContext(SPContext.Current.Site.Url);
ClientOM.File.SaveBinaryDirect(clientContext,
  outputPath, memOut, true);

For these instructions to work, you'll need the following using statements in the source file:

using Microsoft.SharePoint.Client;
using ClientOM = Microsoft.SharePoint.Client;

Making the Document Searchable

At this point you have the logic in place to generate fully functional consolidated documents on the server when a user clicks the Merge Reports button. However, there's one small catch: the generated document is not compatible with the SharePoint crawling mechanism because it contains OpenXML altChunk markup. This is a by-product of merging the reports into the blank document using the code we showed you earlier. The altChunks get replaced with original content when a document is opened in Word.

With the new Word Automation Services in SharePoint 2010, this task can be performed programmatically using the ConversionJob class. This class is part of the Microsoft.Office.Word.Server.dll assembly, so add the reference to this assembly to the project manually. Once you've added this reference, you can use the code in Figure 7 to perform conversion of the altChunks.

Figure 7 Converting altChunks in the Merged Document

string docPath = string.Format(@"{0}{1}",
  SPContext.Current.Site.Url.Replace(@"\\", ""),
  outputPath);

ConversionJobSettings JobSettings =
  new ConversionJobSettings();
JobSettings.OutputFormat = SaveFormat.Document;
JobSettings.OutputSaveBehavior =
  SaveBehavior.AlwaysOverwrite;

ConversionJob ConvJob = new ConversionJob(
  "Word Automation Services", JobSettings);
ConvJob.UserToken = SPContext.Current.Site.UserToken;
ConvJob.AddFile(docPath, docPath);
ConvJob.Start();

See the code download for this article for additional details of the solution, which you can use as the basis of your own reporting system.

Final Steps

In order to test this code, we modified our SharePoint server's configuration to run the Automation Service one minute after getting a run request. By default, this interval is set to five minutes, and we didn't want to wait that long for our conversion to happen. If you'd like to change this setting, you can do so in SharePoint Central Administration under Application Management | Manage Service Applications | Word Automation Services, setting the "Frequency to start conversions" under Conversion Throughput to one minute.

The final generated report contains all the weekly status reports you created, merged into a single new document with each of the individual reports stacked one after the other.

And that's it. In a future article we'll take the concept of server-side merging of document contents to the next level. We'll show you how to implement a mail-merge type of scenario on the server side, again using Office 2010, SharePoint 2010 and Visual Studio 2010. Until then, happy coding.

For more information on Office 2010 and SharePoint 2010, see the Office (msdn.microsoft.com/office) and SharePoint (msdn.microsoft.com/sharepoint) developer centers. Information about Office OpenXML can be found at msdn.microsoft.com/library/bb448854, and you can read about Word Automation Services at msdn.microsoft.com/library/ee558278(v=office.14).

MANVIR SINGH and ANKUSH BHATIA are part of the Visual Studio Developer Support Team in Microsoft Product Support Services (PSS), helping customers on programming issues involving Office client applications. You can reach Singh at [email protected] or manvirsingh.net. You can reach Bhatia at [email protected] or abhatia.wordpress.com.

THANKS to the following technical expert for reviewing this article: Eric White
SMART CLIENT

Building Distributed
Apps with NHibernate
and Rhino Service Bus
Oren Eini

For a long time, I dealt almost exclusively in Web applications. When I moved over to build a smart client application, at first I was at quite a loss as to how to approach building such an application. How do I handle data access? How do I communicate between the smart client application and the server?

Furthermore, I already had a deep investment in an existing toolset that drastically reduced the time and cost for development, and I really wanted to be able to continue using those tools. It took me a while to figure out the details to my satisfaction, and during that time, I kept thinking how much simpler a Web app would be—if only because I knew how to handle such apps already.

There are advantages and disadvantages to smart client applications. On the plus side, smart clients are responsive and promote interactivity with the user. You also reduce server load by moving processing to a client machine, and enable users to work even while disconnected from back-end systems.

On the other hand, there are the challenges inherent in such smart clients, including contending with the speed, security, and bandwidth limitations of data access over the intranet or Internet. You’re also responsible for synchronizing data between front-end and back-end systems, distributed change-tracking, and handling the issues of working in an occasionally connected environment.

A smart client application, as discussed in this article, can be built with either Windows Presentation Foundation (WPF) or Silverlight. Because Silverlight exposes a subset of WPF features, the techniques and approaches I outline here are applicable to both.

In this article, I start the process of planning and building a smart client application using NHibernate for data access and Rhino Service Bus for reliable communication with the server. The application will function as the front end for an online lending library, which I called Alexandria. The application itself is split into two major pieces. First, there’s an application server running a set of services (where most of the business logic will reside), accessing the database using NHibernate. Second, the smart client UI will make exposing those services to the user easy.

NHibernate (nhforge.org) is an object-relational mapping (O/RM) framework designed to make it as easy to work with relational databases as it is to work with in-memory data. Rhino Service Bus (github.com/rhino-esb/rhino-esb) is an open source service bus implementation built on the Microsoft .NET Framework, focusing primarily on ease of development, deployment and use.

This article discusses:
• Distribution of responsibilities
• Fallacies of distributed computing
• Queues and disconnected operation
• Session and transaction management

Technologies discussed:
NHibernate, Rhino Service Bus

Distribution of Responsibilities
The first task in building the lending library is to decide on the proper distribution of responsibility between the front-end and back-end systems. One path is to focus the application primarily on the UI so that most of the processing is done on the client machine. In this case the back end serves mostly as a data repository. In essence, this is just a repetition of the traditional client/server application, with the back end serving as a mere proxy for the data store. This is a valid design choice if the back-end system is just a data repository. A personal book catalog, for example, might benefit from such architecture, because the behavior of the application is limited to managing data for the users, with no manipulation of the data on the server side.

[Figure 1 The Application’s Architecture — diagram: Smart Client ↔ Application Server (NHibernate & Rhino Service Bus) ↔ Database]
For such applications, I recommend making use of WCF RIA Services or WCF Data Services. If you want the back-end server to expose a CRUD interface for the outside world, then leveraging WCF RIA Services or WCF Data Services allows you to drastically cut down the time required to build the application. But while both technologies let you add your own business logic to the CRUD interface, any attempt to implement significant application behavior using this approach would likely result in an unmaintainable, brittle mess. I won’t cover building such an application in this article, but Brad Abrams has shown a step-by-step approach for building just such an application using NHibernate and WCF RIA Services on his blog at blogs.msdn.com/brada/archive/2009/08/06/business-apps-example-for-silverlight-3-rtm-and-net-ria-services-july-update-part-nhibernate.aspx.

Going all the way to the other extreme, you can choose to implement most of the application behavior on the back end, leaving the front end with purely presentation concerns. While this seems reasonable at first, because this is how you typically write Web-based applications, it means that you can’t take advantage of running a real application on the client side. State management would be harder. Essentially you’re back to writing a Web application, with all the complexities this entails. You won’t be able to shift processing to the client machine and you won’t be able to handle interruptions in connectivity.

Worse, from the user perspective, this approach means that you present a more sluggish UI since all actions require a roundtrip to the server.

I’m sure it won’t surprise you that the approach I’m taking in this example is somewhere in the middle. I’m going to take advantage of the possibilities offered by running on the client machine, but at the same time significant parts of the application run as services on the back end, as shown in Figure 1.

The sample solution is composed of three projects, which you can download from github.com/ayende/alexandria. Alexandria.Backend is a console application that hosts the back-end code. Alexandria.Client contains the front-end code, and Alexandria.Messages contains the message definitions shared between them. To run the sample, both Alexandria.Backend and Alexandria.Client need to be running.

One advantage of hosting the back end in a console application is that it allows you to easily simulate disconnected scenarios by simply shutting down the back-end console application and starting it up at a later time.

Fallacies of Distributed Computing
With the architectural basics in hand, let’s take a look at the implications of writing a smart client application. Communication with the back end is going to be through an intranet or the Internet. Considering the fact that the main source for remote calls in most Web applications is a database or another application server located in the same datacenter (and often in the same rack), this is a drastic change with several implications.

Intranet and Internet connections suffer from issues of speed, bandwidth limitations and security. The vast difference in the costs of communication dictates a different communication structure than the one you’d adopt if all the major pieces in the application were residing in the same datacenter.

Among the biggest hurdles you have to deal with in distributed applications are the fallacies of distributed computing. These are a set of assumptions that developers tend to make when building distributed applications, which ultimately prove false. Relying on these false assumptions usually results in reduced capabilities or a very high cost to redesign and rebuild the system. There are eight fallacies:

• The network is reliable.
• Latency is zero.
• Bandwidth is infinite.
• The network is secure.
• Topology doesn’t change.
• There is one administrator.
• Transport cost is zero.
• The network is homogeneous.

Any distributed application that doesn’t take these fallacies into account is going to run into severe problems. A smart client application needs to deal with those issues head on. The use of caching is a topic of great importance in such circumstances. Even if you aren’t interested in working in a disconnected fashion, a cache is almost always useful for increasing application responsiveness.

Another aspect you need to consider is the communication model for the application. It may seem that the simplest model is a standard service proxy that allows you to perform remote procedure calls (RPCs), but this tends to cause problems down the road. It leads to more-complex code to handle a disconnected state and requires you to explicitly handle asynchronous calls if you want to avoid blocking in the UI thread.
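To make the contrast with RPC concrete, here is a minimal, self-contained sketch of the store-and-forward idea behind one-way messaging. The types here (ToyQueue and friends) are invented for illustration only; this is not the Rhino Queues or Rhino Service Bus API. The point is that Send returns immediately whether or not anyone is listening, and stored messages are delivered once a consumer becomes available:

```csharp
using System;
using System.Collections.Generic;

// Toy store-and-forward queue (hypothetical type, for illustration only).
class ToyQueue
{
    private readonly Queue<string> pending = new Queue<string>();
    private Action<string> consumer;

    // Send never blocks: if no consumer is attached yet,
    // the message is stored until one becomes available.
    public void Send(string message)
    {
        if (consumer != null) consumer(message);
        else pending.Enqueue(message);
    }

    // Attaching a consumer drains everything that queued up while it was away.
    public void Attach(Action<string> handler)
    {
        consumer = handler;
        while (pending.Count > 0)
            consumer(pending.Dequeue());
    }
}

static class Program
{
    static void Main()
    {
        var queue = new ToyQueue();

        // The "back end" is offline; both calls still return immediately.
        queue.Send("MyBooksQuery");
        queue.Send("MyQueueQuery");

        // The "back end" comes online and the stored messages are delivered.
        var delivered = new List<string>();
        queue.Attach(delivered.Add);

        Console.WriteLine(string.Join(",", delivered));
    }
}
```

A real queuing subsystem adds durability, transactions and remote delivery on top of this basic shape, but the non-blocking contract the sender sees is the same.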
[Figure 2 A Single Request to the Server Contains Several Messages — diagram: the User Interface packs a MyBooks Query, MyQueue Query, Recommendations Query and Subscription Details Query into a single request, alongside a Local Cache]

Back-End Basics
Next, there’s the problem of how to structure the back end of the application in a way that provides both good performance and a degree of separation from the way the UI is structured.

The ideal scenario from a performance and responsiveness perspective is to make a single call to the back end to get all the data you need for the presented screen. The problem with going this route is that you end up with a service interface that mimics the smart client UI exactly. This is bad for a whole host of reasons. Mainly, the UI is the most changeable part in an application. Tying the service interface to the UI in this fashion results in frequent changes to the service, driven by purely UI changes.

That, in turn, means deployment of the application just got a lot harder. You have to deploy both the front end and the back end at the same time, and trying to support multiple versions at the same time is likely to result in greater complexity. In addition, the service interface can’t be used to build additional UIs or as an integration point for third-party or additional services.

If you try going the other route—building a standard, fine-grained interface—you’ll run head on into the fallacies (a fine-grained interface leads to a high number of remote calls, resulting in issues with latency, reliability and bandwidth).

The answer to this challenge is to break away from the common RPC model. Instead of exposing methods to be called remotely, let’s use a local cache and a message-oriented communication model.

Figure 2 shows how you pack several requests from the front end to the back end. This allows you to make a single remote call, but keep a programming model on the server side that isn’t tightly coupled to the needs of the UI.

To increase responsiveness, you can include a local cache that can answer some queries immediately, leading to a more-responsive application.

One of the things you have to consider in these scenarios is what types of data you have and the freshness requirements for any data you display. In the Alexandria application, I lean heavily on the local cache because it is acceptable to show the user cached data while the application requests fresh data from the back-end system. Other applications—stock trading, for example—should probably show nothing at all rather than stale data.

Disconnected Operations
The next problem you have to face is handling disconnected scenarios. In many applications, you can specify that a connection is mandatory, which means you can simply show the user an error if the back-end servers are unavailable. But one benefit of a smart client application is that it can work in a disconnected manner, and the Alexandria application takes full advantage of that. However, this means the cache becomes even more important because it’s used both to speed communication and to serve data from the cache if the back-end system is unreachable.

By now, I believe you have a good understanding of the challenges involved in building such an application, so let’s move on to see how to solve those challenges.

Queues Are One of My Favorite Things
In Alexandria, there’s no RPC communication between the front end and the back end. Instead, as shown in Figure 3, all communication is handled via one-way messages going through queues.

[Figure 3 The Alexandria Communication Model — diagram: the User Interface and the Application Server (NHibernate & Rhino Service Bus) each have their own Queue and exchange one-way messages through them]

Queues provide a rather elegant way of solving the communication issues identified earlier. Instead of communicating directly between the front end and the back end (which means supporting disconnected scenarios is hard), you can let the queuing subsystem handle all of that.

Using queues is quite simple. You ask your local queuing subsystem to send a message to some queue. The queuing subsystem takes ownership of the message and ensures that it reaches its destination at some point. Your application, however, doesn’t wait for the message to reach its destination and can carry on doing its work.

If the destination queue is not currently available, the queuing subsystem will wait until the destination queue becomes available again, then deliver the message. The queuing subsystem usually
persists the message to disk until it’s delivered, so pending messages will still arrive at their destination even if the source machine has been restarted.

When using queues, it’s easy to think in terms of messages and destinations. A message arriving at a back-end system will trigger some action, which may then result in a reply sent to the original sender. Note that there’s no blocking on either end, because each system is completely independent.

Queuing subsystems include MSMQ, ActiveMQ, RabbitMQ and others. The Alexandria application uses Rhino Queues (github.com/rhino-queues/rhino-queues), an open source, xcopy-deployed queuing subsystem. I chose Rhino Queues for the simple reason that it requires no installation or administration, making it ideal for use in samples and in applications that you need to deploy to many machines. It’s also worth noting that I wrote Rhino Queues. I hope you like it.

RPC Is Thy Enemy
One of the most common mistakes in building a distributed application is to ignore the distribution aspect of the application. WCF, for example, makes it easy to ignore the fact that you’re making a method call over the network. While that’s a very simple programming model, it means you need to be extremely careful not to violate one of the fallacies of distributed computing.

Indeed, it’s the very fact that the programming model offered by frameworks such as WCF is so similar to the one you use for calling methods on the local machine that leads you to make those false assumptions.

A standard RPC API means blocking when making a call over the network, higher cost for each remote method call and the potential for failure if the back-end server is not available. It’s certainly possible to build a good distributed application on top of this foundation, but it takes greater care.

Taking a different approach leads you to a programming model based on explicit message exchanges (as opposed to the implicit message exchanges common in most SOAP-based RPC stacks). That model may look strange at first, and it does require you to shift your thinking a bit, but it turns out that by making this shift, you significantly reduce the amount of complexity to worry about overall.

My example Alexandria application is built on top of a one-way messaging platform, and it makes full use of this platform so the application is aware of the fact it’s distributed and actually takes advantage of that.

Putting Queues to Work
Let’s see how you can handle getting the data for the main screen using queues. Here’s the ApplicationModel initialization routine:

protected override void OnInitialize() {
  bus.Send(
    new MyBooksQuery { UserId = userId },
    new MyQueueQuery { UserId = userId },
    new MyRecommendationsQuery { UserId = userId },
    new SubscriptionDetailsQuery { UserId = userId });
}

I’m sending a batch of messages to the server, requesting several pieces of information. There are a number of things to notice here. The granularity of the messages sent is high. Rather than sending a single, general message such as MainWindowQuery, I send many messages (MyBooksQuery, MyQueueQuery, and so on), each for a very specific piece of information. As discussed previously, this allows you to benefit both from sending several messages in a single batch (reducing network roundtrips) and reducing the coupling between the front end and the back end.

Note that all of the messages end with the term Query. This is a simple convention I use to denote pure query messages that change no state and expect some sort of response.

Finally, notice that I don’t seem to be getting any sort of reply from the server. Because I’m using queues, the mode of communication is fire and forget. I fire off a message (or a batch of messages) now, and I deal with the replies at a later stage.

Before looking at how the front end deals with the responses, let’s see how the back end processes the messages I just sent. Figure 4 shows how the back-end server consumes a query for books. And here, for the first time, you can see how I use both NHibernate and Rhino Service Bus.

Figure 4 Consuming a Query on the Back-End System

public class MyBooksQueryConsumer :
  ConsumerOf<MyBooksQuery> {
  private readonly ISession session;
  private readonly IServiceBus bus;

  public MyBooksQueryConsumer(
    ISession session, IServiceBus bus) {
    this.session = session;
    this.bus = bus;
  }

  public void Consume(MyBooksQuery message) {
    var user = session.Get<User>(message.UserId);

    Console.WriteLine("{0} has {1} books at home",
      user.Name, user.CurrentlyReading.Count);

    bus.Reply(new MyBooksResponse {
      UserId = message.UserId,
      Timestamp = DateTime.Now,
      Books = user.CurrentlyReading.ToBookDtoArray()
    });
  }
}

But before diving into the actual code that handles this message, let’s discuss the structure in which this code is running.

It’s All About Messages
Rhino Service Bus (hibernatingrhinos.com/open-source/rhino-service-bus) is, unsurprisingly, a service bus implementation. It’s a communication framework based on a one-way queued message exchange, heavily inspired by NServiceBus (nservicebus.com).

A message sent on the bus will arrive at its destination queue, where a message consumer will be invoked. That message consumer in Figure 4 is MyBooksQueryConsumer. A message consumer is a class that implements ConsumerOf<TMsg>, and the Consume method is invoked with the appropriate message instance to handle the message.

You can probably surmise from the MyBooksQueryConsumer constructor that I’m using an Inversion of Control (IoC) container to supply dependencies for the message consumer. In the case of
MyBooksQueryConsumer, those dependencies are the bus itself and the NHibernate session.

The actual code for consuming the message is straightforward. You get the appropriate user from the NHibernate session and send a reply back to the message originator with the requested data.

The front end also has a message consumer. This consumer is for MyBooksResponse:

public class MyBooksResponseConsumer :
  ConsumerOf<MyBooksResponse> {
  private readonly ApplicationModel applicationModel;

  public MyBooksResponseConsumer(
    ApplicationModel applicationModel) {
    this.applicationModel = applicationModel;
  }

  public void Consume(MyBooksResponse message) {
    applicationModel.MyBooks.UpdateFrom(message.Books);
  }
}

This simply updates the application model with the data from the message. One thing, however, should be noted: the Consume method is not called on the UI thread. Instead, it’s called on a background thread. The application model is bound to the UI, however, so updating it must happen on the UI thread. The UpdateFrom method is aware of that and will switch to the UI thread to update the application model in the correct thread.

The code for handling the other messages on both the back end and the front end is similar. This communication is purely asynchronous. At no point are you waiting for a reply from the back end, and you aren’t using the .NET Framework’s asynchronous API. Instead, you have an explicit message exchange, which usually happens almost instantaneously, but can also stretch over a longer time period if you’re working in a disconnected mode.

Earlier, when I sent the queries to the back end, I just told the bus to send the messages, but I didn’t say where to send them. In Figure 4, I just called Reply, again not specifying where the message should be sent. How does the bus know where to send those messages? In the case of sending messages to the back end, the answer is: configuration. In the App.config, you’ll find the following configuration:

<messages>
  <add name="Alexandria.Messages"
    endpoint="rhino.queues://localhost:51231/alexandria_backend"/>
</messages>

This tells the bus that all messages whose namespace starts with Alexandria.Messages should be sent to the alexandria_backend endpoint.

In the handling of the messages in the back-end system, calling Reply simply means sending the message back to its originator. This configuration specifies the ownership of a message, that is, to whom to send this message when it’s placed on the bus and where to send a subscription request so you’ll be included in the distribution list when messages of this type are published. I’m not using message publication in the Alexandria application, so I won’t cover that.

Figure 5 Initializing Messaging Sessions

public class AlexandriaBootStrapper :
  AbstractBootStrapper {
  public AlexandriaBootStrapper() {
    NHibernateProfiler.Initialize();
  }

  protected override void ConfigureContainer() {
    var cfg = new Configuration()
      .Configure("nhibernate.config");
    var sessionFactory = cfg.BuildSessionFactory();

    container.Kernel.AddFacility(
      "factory", new FactorySupportFacility());

    container.Register(
      Component.For<ISessionFactory>()
        .Instance(sessionFactory),
      Component.For<IMessageModule>()
        .ImplementedBy<NHibernateMessageModule>(),
      Component.For<ISession>()
        .UsingFactoryMethod(() =>
          NHibernateMessageModule.CurrentSession)
        .LifeStyle.Is(LifestyleType.Transient));

    base.ConfigureContainer();
  }
}

Figure 6 Managing Session Lifetime

public class NHibernateMessageModule : IMessageModule {
  private readonly ISessionFactory sessionFactory;

  [ThreadStatic]
  private static ISession currentSession;

  public static ISession CurrentSession {
    get { return currentSession; }
  }

  public NHibernateMessageModule(
    ISessionFactory sessionFactory) {
    this.sessionFactory = sessionFactory;
  }

  public void Init(ITransport transport,
    IServiceBus serviceBus) {
    transport.MessageArrived += TransportOnMessageArrived;
    transport.MessageProcessingCompleted
      += TransportOnMessageProcessingCompleted;
  }

  private static void
    TransportOnMessageProcessingCompleted(
    CurrentMessageInformation currentMessageInformation,
    Exception exception) {
    if (currentSession != null)
      currentSession.Dispose();
    currentSession = null;
  }

  private bool TransportOnMessageArrived(
    CurrentMessageInformation currentMessageInformation) {
    if (currentSession == null)
      currentSession = sessionFactory.OpenSession();
    return false;
  }
}
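The effect of a message module like the one in Figure 6 can be sketched in isolation. The following self-contained toy — invented ToySession/ToyMessageModule types, not the actual NHibernate or Rhino Service Bus APIs — opens one "session" lazily when the first message of a batch arrives and releases it when the batch completes, so every consumer in the batch shares the same session:

```csharp
using System;
using System.Collections.Generic;

// Toy stand-in for an NHibernate session (illustration only).
class ToySession
{
    public static int Opened;           // counts how many sessions were created
    public ToySession() { Opened++; }
}

static class ToyMessageModule
{
    [ThreadStatic] private static ToySession current;

    public static ToySession Current { get { return current; } }

    // Called once per message in a batch: open the session lazily,
    // so all messages in the batch share a single session.
    public static void MessageArrived()
    {
        if (current == null) current = new ToySession();
    }

    // Called once when the whole batch completes: release the session.
    public static void BatchCompleted() { current = null; }
}

static class Program
{
    static void Main()
    {
        var distinct = new HashSet<ToySession>();

        for (int i = 0; i < 3; i++)     // one batch containing three messages
        {
            ToyMessageModule.MessageArrived();
            distinct.Add(ToyMessageModule.Current);
        }
        ToyMessageModule.BatchCompleted();

        Console.WriteLine(ToySession.Opened);   // sessions opened for the batch
        Console.WriteLine(distinct.Count);      // distinct sessions observed
    }
}
```

Both counts come out to one: three messages, one shared session, which is exactly what makes the first-level cache trick described next possible.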
[Figure 7 The NHibernate Profiler View of Processing Requests]

Session Management
You’ve seen how the communication mechanism works now, but there are infrastructure concerns to address before moving forward. As in any NHibernate application, you need some way of managing the session lifecycle and handling transactions properly.

The standard approach for Web applications is to create a session per request, so each request has its own session. For a messaging application, the behavior is almost identical. Instead of having a session per request, you have a session per message batch.

It turns out that this is handled almost completely by the infrastructure. Figure 5 shows the initialization code for the back-end system.

Bootstrapping is an explicit concept in Rhino Service Bus, implemented by classes deriving from AbstractBootStrapper. The bootstrapper has the same job as the Global.asax in a typical Web application. In Figure 5 I first build the NHibernate session factory, then set up the container (Castle Windsor) to provide the NHibernate session from the NHibernateMessageModule.

A message module has the same purpose as an HTTP module in a Web application: to handle cross-cutting concerns across all requests. I use the NHibernateMessageModule to manage the session lifetime, as shown in Figure 6.

The code is pretty simple: register for the appropriate events, create and dispose of the session in the appropriate places and you’re done.

One interesting implication of this approach is that all messages in a batch will share the same session, which means that in many cases you can take advantage of NHibernate’s first-level cache.

Transaction Management
That’s it for session management, but what about transactions? A best practice for NHibernate is that all interactions with the database should be handled via transactions. But I’m not using NHibernate’s transactions here. Why?

The answer is that transactions are handled by Rhino Service Bus. Instead of making each consumer manage its own transactions, Rhino Service Bus takes a different approach. It makes use of System.Transactions.TransactionScope to create a single transaction that encompasses all the consumers for messages in the batch.

That means all the actions taken in response to a message batch (as opposed to a single message) are part of the same transaction. NHibernate will automatically enlist a session in the ambient transaction, so when you’re using Rhino Service Bus you have no need to explicitly deal with transactions.

The combination of a single session and a single transaction makes it easy to combine multiple operations into a single transactional unit. It also means you can directly benefit from NHibernate’s first-level cache. For example, here’s the relevant code to handle MyQueueQuery:

public void Consume(MyQueueQuery message) {
  var user = session.Get<User>(message.UserId);

  Console.WriteLine("{0} has {1} books queued for reading",
    user.Name, user.Queue.Count);

  bus.Reply(new MyQueueResponse {
    UserId = message.UserId,
    Timestamp = DateTime.Now,
    Queue = user.Queue.ToBookDtoArray()
  });
}

The actual code for handling a MyQueueQuery and MyBooksQuery is nearly identical. So, what’s the performance implication of a single transaction per session for the following code?

bus.Send(
  new MyBooksQuery {
    UserId = userId
  },
  new MyQueueQuery {
    UserId = userId
  });

At first glance, it looks like it would take four queries to gather all the required information. In MyBooksQuery, one query to get the appropriate user and another to load the user’s books. The same appears to be the case in MyQueueQuery: one query to get the user and another to load the user’s queue.

The use of a single session for the entire batch, however, shows that you’re actually using the first-level cache to avoid unnecessary queries, as you can see in the NHibernate Profiler (nhprof.com) output in Figure 7.

[Figure 8 Using the Cache in Concurrent Messaging Operations — sequence diagram: the Client sends a MyBooksRequest to both the Cache and the Back End; the Cache immediately returns a cached MyBooksResponse, and the Back End later delivers a fresh MyBooksResponse]

Supporting Occasionally Connected Scenarios
As it stands, the application won’t throw an error if the back-end server can’t be reached, but it wouldn’t be very useful, either.
The next step in the evolution of this application is to turn it into a real occasionally connected client by introducing a cache that allows the application to continue operating even if the back-end server is not responding. However, I won't use the traditional caching architecture in which the application code makes explicit calls to the cache. Instead, I'll apply the cache at the infrastructure level.
Figure 8 shows the sequence of operations when the cache is implemented as part of the messaging infrastructure when you send a single message requesting data about a user's books.
The client sends a MyBooksQuery message. The message is sent on the bus while, at the same time, the cache is queried to see if it has the response for this request. If the cache contains the response for the previous request, the cache immediately causes the cached message to be consumed as if it just arrived on the bus.
The response from the back-end system arrives. The message is consumed normally and is also placed in the cache.
On the surface, this approach seems to be complicated, but it results in effective caching behavior and allows you to almost completely ignore caching concerns in the application code. With a persistent cache (one that survives application restarts), you can operate the application completely independently without requiring any data from the back-end server.
Now let's implement this functionality. I assume a persistent cache (the sample code provides a simple implementation that uses binary serialization to save the values to disk) and define the following conventions:
• A message can be cached if it's part of a request/response message exchange.
• Both the request and response messages carry the cache key for the message exchange.
The message exchange is defined by an ICacheableQuery interface with a single Key property and an ICacheableResponse interface with Key and Timestamp properties.
To implement this convention, I write a CachingMessageModule that will run on the front end, intercepting incoming and outgoing messages. Figure 9 shows how incoming messages are handled.

Figure 9 Caching Incoming Connections

  private bool TransportOnMessageArrived(
    CurrentMessageInformation currentMessageInformation) {

    var cachableResponse =
      currentMessageInformation.Message as ICacheableResponse;
    if (cachableResponse == null)
      return false;

    var alreadyInCache = cache.Get(cachableResponse.Key);
    if (alreadyInCache == null ||
      alreadyInCache.Timestamp < cachableResponse.Timestamp) {
      cache.Put(cachableResponse.Key,
        cachableResponse.Timestamp, cachableResponse);
    }
    return false;
  }

There isn't much going on here—if the message is a cacheable response, I put it in the cache. But there is one thing to note: I handle the case of out-of-order messages—messages that have an earlier timestamp arriving after messages with later timestamps. This ensures that only the latest information is stored in the cache.
Handling outgoing messages and dispatching the messages from the cache is more interesting, as you can see in Figure 10.

Figure 10 Dispatching Messages

  private void TransportOnMessageSent(
    CurrentMessageInformation currentMessageInformation) {

    var cacheableQuerys =
      currentMessageInformation.AllMessages.OfType<ICacheableQuery>();
    var responses =
      from msg in cacheableQuerys
      let response = cache.Get(msg.Key)
      where response != null
      select response.Value;

    var array = responses.ToArray();
    if (array.Length == 0)
      return;
    bus.ConsumeMessages(array);
  }

I gather the cached responses from the cache and call ConsumeMessages on them. That causes the bus to invoke the usual message invocation logic, so it looks like the message has arrived again.
Note, however, that even though there's a cached response, you still send the message. The reasoning is that you can provide a quick (cached) response for the user, and update the information shown to the user when the back end replies to new messages.

Next Steps
I have covered the basic building blocks of a smart client application: how to structure the back end and the communication mode between the smart client application and the back end. The latter is important because choosing the wrong communication mode can lead to the fallacies of distributed computing. I also touched on batching and caching, two very important approaches to improving the performance of a smart client application.
On the back end, you've seen how to manage transactions and the NHibernate session, how to consume and reply to messages from the client and how everything comes together in the bootstrapper. In this article, I focused primarily on infrastructure concerns; in the next installment I'll cover best practices for sending data between the back end and the smart client application, and patterns for distributed change management.

OREN EINI (who works under the pseudonym Ayende Rahien) is an active member of several open source projects (NHibernate and Castle among them) and is the founder of many others (Rhino Mocks, NHibernate Query Analyzer and Rhino Commons among them). Eini is also responsible for the NHibernate Profiler (nhprof.com), a visual debugger for NHibernate. You can follow Eini's work at ayende.com/blog.
C# 4.0

New C# Features in the .NET Framework 4
Chris Burrows
Since its initial release in 2002, the C# programming language has been improved to enable programmers to write clearer, more maintainable code. The enhancements have come from the addition of features such as generic types, nullable value types, lambda expressions, iterator methods, partial classes and a long list of other useful language constructs. And, often, the changes were accompanied by giving the Microsoft .NET Framework libraries corresponding support.
This trend toward increased usability continues in C# 4.0. The additions make common tasks involving generic types, legacy interop and working with dynamic object models much simpler. This article aims to give a high-level survey of these new features. I'll begin with generic variance and then look at the legacy and dynamic interop features.

This article discusses:
• Covariance and contravariance
• Dynamic dispatch
• Named arguments and optional properties
• COM interop
Technologies discussed:
C#, Microsoft .NET Framework 4, COM

Covariance and Contravariance
Covariance and contravariance are best introduced with an example, and the best is in the framework. In System.Collections.Generic, IEnumerable<T> and IEnumerator<T> represent, respectively, an object that's a sequence of T's and the enumerator (or iterator) that does the work of iterating the sequence. These interfaces have done a lot of heavy lifting for a long time, because they support the implementation of the foreach loop construct. In C# 3.0, they became even more prominent because of their central role in LINQ and LINQ to Objects—they're the .NET interfaces to represent sequences.
So if you have a class hierarchy with, say, an Employee type and a Manager type that derives from it (managers are employees, after all), then what would you expect the following code to do?

  IEnumerable<Manager> ms = GetManagers();
  IEnumerable<Employee> es = ms;

It seems as though one ought to be able to treat a sequence of Managers as though it were a sequence of Employees. But in C# 3.0, the assignment will fail; the compiler will tell you there's no conversion. After all, it has no idea what the semantics of IEnumerable<T> are. This could be any interface, so for any arbitrary interface IFoo<T>, why would an IFoo<Manager> be more or less substitutable for an IFoo<Employee>?
In C# 4.0, though, the assignment works because IEnumerable<T>, along with a few other interfaces, has changed, an alteration enabled by new support in C# for covariance of type parameters.
IEnumerable<T> is eligible to be more special than the arbitrary IFoo<T> because, though it's not obvious at first glance, members that use the type parameter T (GetEnumerator in IEnumerable<T> and the Current property in IEnumerator<T>) actually use T only in the position of a return value. So you only get a Manager out of the sequence, and you never put one in.
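That behavior is easy to verify with a tiny, self-contained program. The Employee, Manager and GetManagers names below are hypothetical stand-ins for the article's examples; the sketch assumes compilation against the .NET Framework 4 or later:

```csharp
using System;
using System.Collections.Generic;

class Employee { }
class Manager : Employee { }

class CovarianceDemo {
    public static IEnumerable<Manager> GetManagers() {
        return new List<Manager> { new Manager(), new Manager() };
    }

    static void Main() {
        // Compiles in C# 4.0 because IEnumerable<out T> is covariant;
        // the same assignment is a compile error in C# 3.0.
        IEnumerable<Employee> es = GetManagers();

        int count = 0;
        foreach (Employee e in es) count++;
        Console.WriteLine(count); // prints 2
    }
}
```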
In contrast, think of List<T>. Making a List<Manager> substitutable for a List<Employee> would be a disaster, because of the following:

  List<Manager> ms = GetManagers();
  List<Employee> es = ms; // Suppose this were possible
  es.Add(new EmployeeWhoIsNotAManager()); // Uh oh

As this shows, once you think you're looking at a List<Employee>, you can insert any employee. But the list in question is actually a List<Manager>, so inserting a non-Manager must fail. You've lost type safety if you allow this. List<T> cannot be covariant in T.
The new language feature in C# 4.0, then, is the ability to define types, such as the new IEnumerable<T>, that admit conversions among themselves when the type parameters in question bear some relationship to one another. This is what the .NET Framework developers who wrote IEnumerable<T> used, and this is what their code looks like (simplified, of course):

  public interface IEnumerable<out T> { /* ... */ }

Notice the out keyword modifying the definition of the type parameter, T. When the compiler sees this, it will mark T as covariant and check that, in the definition of the interface, all uses of T are up to snuff (in other words, that they're used in out positions only—that's why this keyword was picked).
Why is this called covariance? Well, it's easiest to see when you start to draw arrows. To be concrete, let's use the Manager and Employee types. Because there's an inheritance relationship between these classes, there's an implicit reference conversion from Manager to Employee:

  Manager -> Employee

And now, because of the annotation of T in IEnumerable<out T>, there's also an implicit reference conversion from IEnumerable<Manager> to IEnumerable<Employee>. That's what the annotation provides for:

  IEnumerable<Manager> -> IEnumerable<Employee>

This is called covariance, because the arrows in each of the two examples point in the same direction. We started with two types, Manager and Employee. We made new types out of them, IEnumerable<Manager> and IEnumerable<Employee>. The new types convert the same way as the old ones.
Contravariance is when this happens backward. You might anticipate that this could happen when the type parameter, T, is used only as input, and you'd be right. For example, the System namespace contains an interface called IComparable<T>, which has a single method called CompareTo:

  public interface IComparable<in T> {
    int CompareTo(T other);
  }

If you have an IComparable<Employee>, you should be able to treat it as though it were an IComparable<Manager>, because the only thing you can do is put Employees into the interface. Because a manager is an employee, putting a manager in should work, and it does. The in keyword modifies T in this case, and this scenario functions correctly:

  IComparable<Employee> ec = GetEmployeeComparer();
  IComparable<Manager> mc = ec;

This is called contravariance because the arrow got reversed this time:

  Manager -> Employee
  IComparable<Manager> <- IComparable<Employee>

So the language feature here is pretty simple to summarize: You can add the keyword in or out whenever you define a type parameter, and doing so gives you free extra conversions. There are some limitations, though.
First, this works with generic interfaces and delegates only. You can't declare a generic type parameter on a class or struct in this manner. An easy way to rationalize this is that delegates are very much like interfaces that have just one method, and in any case, classes would often be ineligible for this treatment because of fields. You can think of any field on the generic class as being both an input and an output, depending on whether you write to it or read from it. If those fields involve type parameters, the parameters can be neither covariant nor contravariant.
Second, whenever you have an interface or delegate with a covariant or contravariant type parameter, you're granted new conversions on that type only when the type arguments, in the usage of the interface (not its definition), are reference types. For instance, because int is a value type, the IEnumerator<int> doesn't convert to IEnumerator<object>, even though it looks like it should:

  IEnumerator<int> -/-> IEnumerator<object>

The reason for this behavior is that the conversion must preserve the type representation. If the int-to-object conversion were allowed, calling the Current property on the result would be impossible, because the value type int has a different representation on the stack than an object reference does. All reference types have the same representation on the stack, however, so only type arguments that are reference types yield these extra conversions.
Very likely, most C# developers will happily use this new language feature—they'll get more conversions of framework types and fewer compiler errors when using some types from the .NET Framework (IEnumerable<T>, IComparable<T>, Func<T>, Action<T>, among others). And, in fact, anyone designing a library with generic interfaces and delegates is free to use the new in and out type parameters when appropriate to make life easier for their users.
By the way, this feature does require support from the runtime—but the support has always been there. It lay dormant for several releases, however, because no language made use of it. Also, previous versions of C# allowed some limited conversions that were contravariant. Specifically, they let you make delegates out of methods that had compatible return types. In addition, array types have always been covariant. These existing features are distinct from the new ones in C# 4.0, which actually let you define your own types that are covariant and contravariant in some of their type parameters.

Dynamic Dispatch
On to the interop features in C# 4.0, starting with what is perhaps the biggest change.
C# now supports dynamic late-binding. The language has always been strongly typed, and it continues to be so in version 4.0. Microsoft believes this makes C# easy to use, fast and suitable for all the work .NET programmers are putting it to. But there are times when you need to communicate with systems not based on .NET.
Traditionally, there were at least two approaches to this. The first was simply to import the foreign model directly into .NET as a proxy. COM Interop provides one example. Since the original release of the
.NET Framework, it has used this strategy with a tool called TLBIMP, which creates new .NET proxy types you can use directly from C#.
LINQ-to-SQL, shipped with C# 3.0, contains a tool called SQLMETAL, which imports an existing database into C# proxy classes for use with queries. You'll also find a tool that imports Windows Management Instrumentation (WMI) classes to C#. Many technologies allow you to write C# (often with attributes) and then perform interop using your handwritten code as basis for external actions, such as LINQ-to-SQL, Windows Communication Foundation (WCF) and serialization.
The second approach abandons the C# type system entirely—you embed strings and data in your code. This is what you do whenever you write code that, say, invokes a method on a JScript object or when you embed a SQL query in your ADO.NET application. You're even doing this when you defer binding to run time using reflection, even though the interop in that case is with .NET itself.
The dynamic keyword in C# is a response to dealing with the hassles of these other approaches. Let's start with a simple example—reflection. Normally, using it requires a lot of boilerplate infrastructure code, such as:

  object o = GetObject();
  Type t = o.GetType();
  object result = t.InvokeMember("MyMethod",
    BindingFlags.InvokeMethod, null,
    o, new object[] { });
  int i = Convert.ToInt32(result);

With the dynamic keyword, instead of calling a method MyMethod on some object using reflection in this manner, you can now tell the compiler to please treat o as dynamic and delay all analysis until run time. Code that does that looks like this:

  dynamic o = GetObject();
  int i = o.MyMethod();

It works, and it accomplishes the same thing with code that's much less convoluted.
The value of this shortened, simplified C# syntax is perhaps more clear if you look at the ScriptObject class that supports operations on a JScript object. The class has an InvokeMember method that has more and different parameters, except in Silverlight, which actually has an Invoke method (notice the difference in the name) with fewer parameters. Neither of these are the same as what you'd need to invoke a method on an IronPython or IronRuby object or on any number of non-C# objects you might come into contact with.
In addition to objects that come from dynamic languages, you'll find a variety of data models that are inherently dynamic and have different APIs supporting them, such as HTML DOMs, the System.Xml DOM and the XLinq model for XML. COM objects are often dynamic and can benefit from the delay to run time of some compiler analysis.

Figure 1 The DLR Runs on Top of the CLR (diagram: C# dynamic, IronPython, IronRuby and dynamic APIs sit on the Dynamic Language Runtime (DLR), which, together with the rest of the .NET Framework, runs on the Common Language Runtime (CLR))

Essentially, C# 4.0 offers a simplified, consistent view of dynamic operations. To take advantage of it, all you need to do is specify that a given value is dynamic, ensuring that analysis of all operations on the value will be delayed until run time.
In C# 4.0, dynamic is a built-in type, and a special pseudo-keyword signifies it. Note, however, that dynamic is different from var. Variables declared with var actually do have a strong type, but the programmer has left it up to the compiler to figure it out. When the programmer uses dynamic, the compiler doesn't know what type is being used—the programmer leaves figuring it out up to the runtime.

Dynamic and the DLR
The infrastructure that supports these dynamic operations at run time is called the Dynamic Language Runtime (DLR). This new .NET Framework 4 library runs on the CLR, like any other managed library. It's responsible for brokering each dynamic operation between the language that initiated it and the object it occurs on. If a dynamic operation isn't handled by the object it occurs on, a runtime component of the C# compiler handles the bind. A simplified and incomplete architecture diagram looks something like Figure 1.
The interesting thing about a dynamic operation, such as a dynamic method call, is that the receiver object has an opportunity to inject itself into the binding at run time and can, as a result, completely determine the semantics of any given dynamic operation. For instance, take a look at the following code:

  dynamic d = new MyDynamicObject();
  d.Bar("Baz", 3, d);

If MyDynamicObject was defined as shown here, then you can imagine what happens:

  class MyDynamicObject : DynamicObject {
    public override bool TryInvokeMember(
      InvokeMemberBinder binder,
      object[] args, out object result) {

      Console.WriteLine("Method: {0}", binder.Name);
      foreach (var arg in args) {
        Console.WriteLine("Argument: {0}", arg);
      }

      result = args[0];
      return true;
    }
  }

In fact, the code prints:

  Method: Bar
  Argument: Baz
  Argument: 3
  Argument: MyDynamicObject

By declaring d to be of type dynamic, the code that consumes the MyDynamicObject instance effectively opts out of compile-time checking for the operations d participates in. Use of dynamic means "I don't know what type this is going to be, so I don't know
what methods or properties there are right now. Compiler, please let them all through and then figure it out when you really have an object at run time." So the call to Bar compiles even though the compiler doesn't know what it means. Then at run time, the object itself is asked what to do with this call to Bar. That's what TryInvokeMember knows how to handle.
Now, suppose that instead of a MyDynamicObject, you used a Python object:

  dynamic d = GetPythonObject();
  d.bar("Baz", 3, d);

If the object is the file listed here, then the code also works, and the output is much the same:

  def bar(*args):
    print "Method:", bar.__name__
    for x in args:
      print "Argument:", x

Under the covers, for each use of a dynamic value, the compiler generates a bunch of code that initializes and uses a DLR CallSite. That CallSite contains all the information needed to bind at run time, including such things as the method name, extra data, such as whether the operation takes place in a checked context, and information about the arguments and their types.
This code, if you had to maintain it, would be every bit as ugly as the reflection code shown earlier or the ScriptObject code or strings that contain XML queries. That's the point of the dynamic feature in C#—you don't have to write code like that!
When using the dynamic keyword, your code can look pretty much the way you want: like a simple method invocation, a call to an indexer, an operator, such as +, a cast or even compounds, like += or ++. You can even use dynamic values in statements—for example, if(d) and foreach(var x in d). Short-circuiting is also supported, with code such as d && ShortCircuited or d ?? ShortCircuited.
The value of having the DLR provide a common infrastructure for these sorts of operations is that you're no longer having to deal with a different API for each dynamic model you'd like to code against—there's just a single API. And you don't even have to use it. The C# compiler can use it for you, and that should give you more time to actually write the code you want—the less infrastructure code you have to maintain means more productivity for you.
The C# language provides no shortcuts for defining dynamic objects. Dynamic in C# is all about consuming and using dynamic objects. Consider the following:

  dynamic list = GetDynamicList();
  dynamic index1 = GetIndex1();
  dynamic index2 = GetIndex2();
  string s = list[++index1, index2 + 10].Foo();

This code compiles, and it contains a lot of dynamic operations. First, there's the dynamic pre-increment on index1, then the dynamic add with index2. Then a dynamic indexer get is called on list. The product of those operations calls the member Foo. Finally, the total result of the expression is converted to a string and stored in s. That's five dynamic operations in one line, each dispatched at run time.
The compile-time type of each dynamic operation is itself dynamic, and so the "dynamicness" kind of flows from computation to computation. Even if you hadn't included dynamic expressions multiple times, there still would be a number of dynamic operations. There are still five in this one line:

  string s = nonDynamicList[++index1, index2 + 10].Foo();

Because the results of the two indexing expressions are dynamic, the index itself is as well. And because the result of the index is dynamic, so is the call to Foo. Then you're confronted with converting a dynamic value to a string. That happens dynamically, of course, because the object could be a dynamic one that wants to perform some special computation in the face of a conversion request.
Notice in the previous examples that C# allows implicit conversions from any dynamic expression to any type. The conversion to string at the end is implicit and did not require an explicit cast operation. Similarly, any type can be converted to dynamic implicitly.
In this respect, dynamic is a lot like object, and the similarities don't stop there. When the compiler emits your assembly and needs to emit a dynamic variable, it does so by using the type object and then marking it specially. In some sense, dynamic is kind of an alias for object, but it adds the extra behavior of dynamically resolving operations when you use it.
You can see this if you try to convert between generic types that differ only in dynamic and object; such conversions will always work, because at runtime, an instance of List<dynamic> actually is an instance of List<object>:

  List<dynamic> ld = new List<object>();

You can also see the similarity between dynamic and object if you try to override a method that's declared with an object parameter:

  class C {
    public override bool Equals(dynamic obj) {
      /* ... */
    }
  }

Although it resolves to a decorated object in your assembly, I do like to think of dynamic as a real type, because it serves as a reminder that you can do most things with it that you can do with any other type. You can use it as a type argument or, say, as a return value. For instance, this function definition will let you use the result of the function call dynamically without having to put its return value in a dynamic variable:

  public dynamic GetDynamicThing() {
    /* ... */ }

There are a lot more details about the way dynamic is treated and dispatched, but you don't need to know them to use the feature. The essential idea is that you can write code that looks like C#, and if any part of the code you write is dynamic, the compiler will leave it alone until run time.
I want to cover one final topic concerning dynamic: failure. Because the compiler can't check whether the dynamic thing you're using really has the method called Foo, it can't give you an error. Of course, that doesn't mean that your call to Foo will work at run time. It may work, but there are a lot of objects that don't have a
method called Foo. When your expression fails to bind at run time, the binder makes its best attempt to give you an exception that's more or less exactly what the compiler would've told you if you hadn't used dynamic to begin with.
Consider the following code:

  try
  {
    dynamic d = "this is a string";
    d.Foo();
  }
  catch (Microsoft.CSharp.RuntimeBinder.RuntimeBinderException e)
  {
    Console.WriteLine(e.Message);
  }

Here I have a string, and strings clearly do not have a method called Foo. When the line that calls Foo executes, the binding will fail and you'll get a RuntimeBinderException. This is what the previous program prints:

  'string' does not contain a definition for 'Foo'

Which is exactly the error message you, as a C# programmer, expect.

Named Arguments and Optional Parameters
In another addition to C#, methods now support optional parameters with default values so that when you call such a method you can omit those parameters. You can see this in action in this Car class:

  class Car {
    public void Accelerate(
      double speed, int? gear = null,
      bool inReverse = false) {
      /* ... */
    }
  }

You can call the method this way:

  Car myCar = new Car();
  myCar.Accelerate(55);

This has exactly the same effect as:

  myCar.Accelerate(55, null, false);

It's the same because the compiler will insert all the default values that you omit.
C# 4.0 will also let you call methods by specifying some arguments by name. In this way, you can pass an argument to an optional parameter without having to also pass arguments for all the parameters that come before it.
Say you want to call Accelerate to go in reverse, but you don't want to specify the gear parameter. Well, you can do this:

  myCar.Accelerate(55, inReverse: true);

This is a new C# 4.0 syntax, and it's the same as if you had written:

  myCar.Accelerate(55, null, true);

In fact, whether or not parameters in the method you're calling are optional, you can use names when passing arguments. For instance, these two calls are permissible and identical to one another:

  Console.WriteLine(format: "{0:f}", arg0: 6.02214179e23);
  Console.WriteLine(arg0: 6.02214179e23, format: "{0:f}");

If you're calling a method that takes a long list of parameters, you can even use names as a sort of in-code documentation to help you remember which parameter is which.
On the surface, optional arguments and named parameters don't look like interop features. You can use them without ever even thinking about interop. However, the motivation for these features comes from the Office APIs. Consider, for example, Word programming and something as simple as the SaveAs method on the Document interface. This method has 16 parameters, all of which are optional. With previous versions of C#, if you want to call this method you have to write code that looks like this:

  Document d = new Document();
  object filename = "Foo.docx";
  object missing = Type.Missing;
  d.SaveAs(ref filename, ref missing, ref missing, ref missing, ref
  missing, ref missing, ref missing, ref missing, ref missing, ref
  missing, ref missing, ref missing, ref missing, ref missing, ref
  missing, ref missing);

Now, you can write this:

  Document d = new Document();
  d.SaveAs(FileName: "Foo.docx");

I would say that's an improvement for anyone who works with APIs like this. And improving the lives of programmers who need to write Office programs was definitely a motivating factor for adding named arguments and optional parameters to the language.
Now, when writing a .NET library and considering adding methods that have optional parameters, you're faced with a choice. You can either add optional parameters or you can do what C# programmers have done for years: introduce overloads. In the Car.Accelerate example, the latter decision might lead you to produce a type that looks like this:

  class Car {
    public void Accelerate(uint speed) {
      Accelerate(speed, null, false);
    }
    public void Accelerate(uint speed, int? gear) {
      Accelerate(speed, gear, false);
    }
    public void Accelerate(uint speed, int? gear,
      bool inReverse) {
      /* ... */
    }
  }

Selecting the model that suits the library you're writing is up to you. Because C# hasn't had optional parameters until now, the .NET Framework (including the .NET Framework 4) tends to use overloads. If you decide to mix and match overloads with optional parameters, the C# overload resolution has clear tie-breaking rules to determine which overload to call under any given circumstances.

Indexed Properties
Some smaller language features in C# 4.0 are supported only when writing code against a COM interop API. The Word interop in the previous illustration is one example.
C# code has always had the notion of an indexer that you can add to a class to effectively overload the [] operator on instances of that class. This sense of indexer is also called a default indexer, since it isn't given a name and calling it requires no name. Some COM APIs also have indexers that aren't default, which is to say that you can't effectively call them simply by using []—you must specify a name. You can, alternatively, think of an indexed property as a property that takes some extra arguments.
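To make the contrast concrete before looking at COM code, here is the familiar default indexer that C# has always allowed; the Squares class is just an illustration, not from the article:

```csharp
using System;

// A default (unnamed) indexer: called with [] and no name.
// A COM "indexed property" behaves like this, except that the
// accessor carries a name (such as Range) in addition to arguments.
class Squares {
    public int this[int n] {
        get { return n * n; }
    }
}

class IndexerDemo {
    static void Main() {
        var sq = new Squares();
        Console.WriteLine(sq[7]); // prints 49
    }
}
```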
C# 4.0 supports indexed properties on COM interop types. You can't define types in C# that have indexed properties, but you can use them provided you're doing so on a COM type. For an example of what C# code that does this looks like, consider the Range property on an Excel worksheet:

  using Microsoft.Office.Interop.Excel;
  class Program {
    static void Main(string[] args) {
      Application excel = new Application();
      excel.Visible = true;

      Worksheet ws =
        excel.Workbooks.Add().Worksheets["Sheet1"];
      // Range is an indexed property
      ws.Range["A1", "C3"].Value = 123;
      System.Console.ReadLine();
      excel.Quit();
    }
  }

In this example, Range["A1", "C3"] isn't a property called Range that returns a thing that can be indexed. It's one call to a Range accessor that passes A1 and C3 with it. And although Value might not look like an indexed property, it, too, is one! All of its arguments are optional, and because it's an indexed property, you omit them by not specifying them at all. Before the language supported indexed properties, you would have written the call like this:

  ws.get_Range("A1", "C3").Value2 = 123;

Here, Value2 is a property that was added simply because the indexed property Value wouldn't work prior to C# 4.0.

Omitting the Ref Keyword at COM Call Sites
Some COM APIs were written with many parameters passed by reference, even when the implementation doesn't write back to them. In the Office suite, Word stands out as an example—its COM APIs all do this.
When you're confronted with such a library and you need to pass arguments by reference, you can no longer pass any expression that's not a local variable or field, and that's a big headache. In the Word SaveAs example, you can see this in action—you had to declare a local called filename and a local called missing just to call the SaveAs method, since those parameters needed to be passed by reference.

of deploying COM interop assemblies with your application.
When COM interop was introduced in the original version of the .NET Framework, the notion of a Primary Interop Assembly (PIA) was created. This was an attempt to solve the problem of sharing COM objects among components. If you had different interop assemblies that defined an Excel Worksheet, we wouldn't be able to share these Worksheets between components, because they would be different .NET types. The PIA fixed this by existing only once—all clients used it, and the .NET types always matched.
Though a fine idea on paper, in practice deploying a PIA turns out to be a headache, because there's only one, and multiple applications could try to install or uninstall it. Matters are complicated because PIAs are often large, Office doesn't deploy them with default Office installations, and users can circumvent this single assembly system easily just by using TLBIMP to create their own interop assembly.
So now, in an effort to fix this situation, two things have happened:
• The runtime has been given the smarts to treat two structurally identical COM interop types that share the same identifying characteristics (name, GUID and so on) as though they were actually the same .NET type.
• The C# compiler takes advantage of this by simply reproducing the interop types in your own assembly when you compile, removing the need for the interop assembly to exist at run time.
I have to omit some details in the interest of space, but even without knowledge of the details, this is another feature—like dynamic—you should be able to use without a problem. You tell the compiler to embed interop types for you in Visual Studio by setting the Embed Interop Types property on your reference to true.
Document d = new Document();
Because the C# team expects this to be the preferred method of
object filename = "Foo.docx"; referencing COM assemblies, Visual Studio will set this property
object missing = Type.Missing;
d.SaveAs(ref filename, ref missing, // ...
to True by default for any new interop reference added to a C#
You may have noticed in the new C# code that followed, I no project. If you’re using the command-line compiler (csc.exe) to
longer declared a local for filename: build your code, then to embed interop types you must reference
d.SaveAs(FileName: "Foo.docx"); the interop assembly in question using the /L switch rather than /R.
This is possible because of the new omit ref feature for COM Each of the features I’ve covered in this article could itself
interop. Now, when calling a COM interop method, you can generate much more discussion, and the topics all deserve articles
pass any argument by value instead of by reference. If you do, the of their own. I’ve omitted or glossed over many details, but I hope
compiler will create a temporary local on your behalf and pass this serves as a good starting point for exploring C# 4.0 and you
that local by reference for you if required. Of course, you won’t be find time to investigate and make use of these features. And if you
able to see the effect of the method call if the method mutates the do, I hope you enjoy the benefits in productivity and program read-
argument—if you want that, pass the argument by ref. ability they were designed to give you. „
This should make code that uses APIs like this much cleaner.
CHRIS BURROWS is a developer at Microsoft on the C# compiler team. He imple-
mented dynamic in the C# compiler and has been involved with the development
Embedding COM Interop Types of Visual Studio for nine years.
This is more of a C# compiler feature than a C# language feature, but
now you can use a COM interop assembly without that assembly THANKS to the following technical expert for reviewing this article:
having to be present at run time. The goal is to reduce the burden Eric Lippert
msdnmagazine.com July 2010 73
DESIGN PATTERNS

Problems and Solutions with Model-View-ViewModel

Robert McCarter

Windows Presentation Foundation (WPF) and Silverlight provide rich APIs for building modern applications, but understanding and applying all the WPF features in harmony with each other to build well-designed and easily maintained apps can be difficult. Where do you start? And what is the right way to compose your application?

The Model-View-ViewModel (MVVM) design pattern describes a popular approach for building WPF and Silverlight applications. It's both a powerful tool for building applications and a common language for discussing application design with developers. While MVVM is a really useful pattern, it's still relatively young and misunderstood.

When is the MVVM design pattern applicable, and when is it unnecessary? How should the application be structured? How much work is the ViewModel layer to write and maintain, and what alternatives exist for reducing the amount of code in the ViewModel layer? How are related properties within the Model handled elegantly? How should you expose collections within the Model to the View? Where should ViewModel objects be instantiated and hooked up to Model objects?

In this article I'll explain how the ViewModel works, and discuss some benefits and issues involved in implementing a ViewModel in your code. I'll also walk you through some concrete examples of using ViewModel as a document manager for exposing Model objects in the View layer.

This article discusses:
• Model, ViewModel, and View
• Why use a ViewModel?
• Using dynamic properties
• A document manager adapter

Technologies discussed:
Windows Presentation Foundation, Silverlight

Model, ViewModel and View
Every WPF and Silverlight application I've worked on so far had the same high-level component design. The Model was the core of the application, and a lot of effort went into designing it according to object-oriented analysis and design (OOAD) best practices. For me the Model is the heart of the application, representing the biggest and most important business asset because it captures all the complex business entities, their relationships and their functionality.

Sitting atop the Model is the ViewModel. The two primary goals of the ViewModel are to make the Model easily consumable by the WPF/XAML View and to separate and encapsulate the Model from the View. These are excellent goals, although for pragmatic reasons they're sometimes broken.

You build the ViewModel knowing how the user will interact with the application at a high level. However, it's an important part
of the MVVM design pattern that the ViewModel knows nothing about the View. This allows the interaction designers and graphics artists to create beautiful, functional UIs on top of the ViewModel while working closely with the developers to design a suitable ViewModel to support their efforts. In addition, decoupling between View and ViewModel also allows the ViewModel to be more unit testable and reusable.

To help enforce a strict separation between the Model, View and ViewModel layers, I like to build each layer as a separate Visual Studio project. Combined with the reusable utilities, the main executable assembly and any unit testing projects (you have plenty of these, right?), this can result in a lot of projects and assemblies, as illustrated in Figure 1.

Given the large number of projects, this strict-separation approach is obviously most useful on large projects. For small applications with only one or two developers, the benefits of this strict separation may not outweigh the inconvenience of creating, configuring and maintaining multiple projects, so simply separating your code into different namespaces within the same project may provide more than sufficient isolation.

Writing and maintaining a ViewModel is not trivial and it should not be undertaken lightly. However, the answer to the most basic questions—when should you consider the MVVM design pattern and when is it unnecessary—is often found in your domain model.

In large projects, the domain model may be very complex, with hundreds of classes carefully designed to work elegantly together for any type of application, including Web services, WPF or ASP.NET applications. The Model may comprise several assemblies working together, and in very large organizations the domain model is sometimes built and maintained by a specialized development team.

When you have a large and complex domain model, it's almost always beneficial to introduce a ViewModel layer.

On the other hand, sometimes the domain model is simple, perhaps nothing more than a thin layer over the database. The classes may be automatically generated and they frequently implement INotifyPropertyChanged. The UI is commonly a collection of lists or grids with edit forms allowing the user to manipulate the underlying data. The Microsoft toolset has always been very good at building these kinds of applications quickly and easily.

If your model or application falls into this category, a ViewModel would probably impose unacceptably high overhead without sufficiently benefitting your application design.

That said, even in these cases the ViewModel can still provide value. For example, the ViewModel is an excellent place to implement undo functionality. Alternatively, you can choose to use MVVM for a portion of the application (such as document management, as I'll discuss later) and pragmatically expose your Model directly to the View.

Why Use a ViewModel?
If a ViewModel seems appropriate for your application, there are still questions to be answered before you start coding. One of the first is how to reduce the number of proxy properties.

The separation of the View from the Model promoted by the MVVM design pattern is an important and valuable aspect of the pattern. As a result, if a Model class has 10 properties that need to be exposed in the View, the ViewModel typically ends up having 10 identical properties that simply proxy the call to the underlying model instance. These proxy properties usually raise a property-changed event when set to indicate to the View that the property has been changed.

Not every Model property needs to have a ViewModel proxy property, but every Model property that needs to be exposed in the View will typically have a proxy property. The proxy properties usually look like this:

public string Description {
  get {
    return this.UnderlyingModelInstance.Description;
  }
  set {
    this.UnderlyingModelInstance.Description = value;
    this.RaisePropertyChangedEvent("Description");
  }
}

Any non-trivial application will have tens or hundreds of Model classes that need to be exposed to the user through the ViewModel in this fashion. This is simply intrinsic to the separation provided by MVVM.

Writing these proxy properties is boring and therefore error-prone, especially because raising the property-changed event requires a string that must match the name of the property (and will not be included in any automatic code refactoring). To eliminate these proxy events, the common solution is to expose the model instance from the ViewModel wrapper directly, then have the domain model implement the INotifyPropertyChanged interface:

public class SomeViewModel {
  public SomeViewModel( DomainObject domainObject ) {
    Contract.Requires(domainObject!=null,
      "The domain object to wrap must not be null");
    this.WrappedDomainObject = domainObject;
  }

  public DomainObject WrappedDomainObject {
    get; private set;
  }
  ...

Thus, the ViewModel can still expose the commands and additional properties required by the view without duplicating Model properties or creating lots of proxy properties. This approach certainly has its appeal, especially if the Model classes already implement the INotifyPropertyChanged interface. Having the model implement this interface isn't necessarily a bad thing and it was even common with Microsoft .NET Framework 2.0 and Windows Forms applications. It does clutter up the domain model, though, and wouldn't be useful for ASP.NET applications or domain services.

With this approach the View has a dependency on the Model, but it's only an indirect dependency through data binding, which does not require a project reference from the View project to the Model project. So for purely pragmatic reasons this approach is sometimes useful.
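The RaisePropertyChangedEvent helper that the proxy property calls is not shown in the article; it typically lives in a small base class that implements INotifyPropertyChanged. Here is a minimal sketch of such a base class (the ViewModelBase name is my own, not a type from the sample application):

```csharp
using System.ComponentModel;

// Minimal base class supplying the property-changed plumbing
// that hand-written proxy properties rely on.
public abstract class ViewModelBase : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    protected void RaisePropertyChangedEvent(string propertyName)
    {
        PropertyChangedEventHandler handler = this.PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```

A proxy property like Description above would then call RaisePropertyChangedEvent("Description") in its setter after writing the value through to the wrapped model.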
However, this approach does violate the spirit of the MVVM design pattern, and it reduces your ability to introduce new ViewModel-specific functionality later (such as undo capabilities). I've encountered scenarios with this approach that caused a fair bit of rework. Imagine the not-uncommon situation where there's a data binding on a deeply nested property. If the Person ViewModel is the current data context, and the Person has an Address, the data binding might look something like this:

{Binding WrappedDomainObject.Address.Country}

If you ever need to introduce additional ViewModel functionality on the Address object, you'll need to remove data binding references to WrappedDomainObject.Address and instead use new ViewModel properties. This is problematic because updates to the XAML data binding (and possibly the data contexts as well) are hard to test. The View is the one component that doesn't have automated and comprehensive regression tests.

Dynamic Properties
My solution to the proliferation of proxy properties is to use the new .NET Framework 4 and WPF support for dynamic objects and dynamic method dispatch. The latter allows you to determine at run time how to handle reading or writing to a property that does not actually exist on the class. This means you can eliminate all the handwritten proxy properties in the ViewModel while still encapsulating the underlying model. Note, however, that Silverlight 4 does not support binding to dynamic properties.

The simplest way to implement this capability is to have the ViewModel base class extend the new System.Dynamic.DynamicObject class and override the TryGetMember and TrySetMember methods. The Dynamic Language Runtime (DLR) calls these two methods when the property being referenced does not exist on the class, allowing the class to determine at run time how to implement the missing properties. Combined with a small amount of reflection, the ViewModel class can dynamically proxy the property access to the underlying model instance in only a few lines of code:

public override bool TryGetMember(
  GetMemberBinder binder, out object result) {

  string propertyName = binder.Name;
  PropertyInfo property =
    this.WrappedDomainObject.GetType().GetProperty(propertyName);
  if( property==null || property.CanRead==false ) {
    result = null;
    return false;
  }

  result = property.GetValue(this.WrappedDomainObject, null);
  return true;
}

The method starts by using reflection to find the property on the underlying Model instance. (For more details, see the June 2007 "CLR Inside Out" column "Reflections on Reflection" at msdn.microsoft.com/magazine/cc163408.) If the model doesn't have such a property, then the method fails by returning false and the data binding fails. If the property exists, the method uses the property information to retrieve and return the Model's property value. This is more work than the traditional proxy property's get method, but this is the only implementation you need to write for all models and all properties.

The real power of the dynamic proxy property approach is in the property setters. In TrySetMember, you can include common logic such as raising property-changed events. The code looks something like this:

public override bool TrySetMember(
  SetMemberBinder binder, object value) {

  string propertyName = binder.Name;
  PropertyInfo property =
    this.WrappedDomainObject.GetType().GetProperty(propertyName);
  if( property==null || property.CanWrite==false )
    return false;

  property.SetValue(this.WrappedDomainObject, value, null);
  this.RaisePropertyChanged(propertyName);
  return true;
}

Again, the method starts by using reflection to grab the property from the underlying Model instance. If the property doesn't exist or the property is read-only, the method fails by returning false. If the property exists on the domain object, the property information is used to set the Model property. Then you can include any logic common to all property setters. In this sample code I simply raise the property-changed event for the property I just set, but you can easily do more.

One of the challenges of encapsulating a Model is that the Model frequently has what Unified Modeling Language calls derived properties. For example, a Person class probably has a BirthDate property and a derived Age property. The Age property is read-only and automatically calculates the age based on the birth date and the current date:

public class Person : DomainObject {
  public DateTime BirthDate {
    get; set;
  }

  public int Age {
    get {
      var today = DateTime.Now;
      // Simplified demo code!
      int age = today.Year - this.BirthDate.Year;
      return age;
    }
  }
  ...

When the BirthDate property changes, the Age property also implicitly changes because the age is derived mathematically from the birth date. So when the BirthDate property is set, the ViewModel class needs to raise a property-changed event for both the BirthDate property and the Age property. With the dynamic ViewModel approach, you can do this automatically by making this inter-property relationship explicit within the model.
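Putting the two overrides together, the dynamic proxy can be exercised end-to-end through the C# dynamic keyword. The following is a self-contained sketch rather than the article's actual class: DomainObject is replaced by a plain object, the RaisePropertyChanged helper is inlined, and the Person model is simplified to a single property.

```csharp
using System.ComponentModel;
using System.Dynamic;
using System.Reflection;

public class DynamicViewModel : DynamicObject, INotifyPropertyChanged
{
    public DynamicViewModel(object model) { this.ModelInstance = model; }
    public object ModelInstance { get; private set; }

    public event PropertyChangedEventHandler PropertyChanged;
    protected void RaisePropertyChanged(string name)
    {
        PropertyChangedEventHandler handler = this.PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(name));
    }

    // Called by the DLR when the property does not exist on this class.
    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        PropertyInfo property = this.ModelInstance.GetType().GetProperty(binder.Name);
        if (property == null || property.CanRead == false) { result = null; return false; }
        result = property.GetValue(this.ModelInstance, null);
        return true;
    }

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        PropertyInfo property = this.ModelInstance.GetType().GetProperty(binder.Name);
        if (property == null || property.CanWrite == false) return false;
        property.SetValue(this.ModelInstance, value, null);
        this.RaisePropertyChanged(binder.Name);
        return true;
    }
}

// Simplified stand-in for a domain class.
public class Person
{
    public string Name { get; set; }
}
```

Declaring the wrapper as dynamic (dynamic vm = new DynamicViewModel(person);) routes reads and writes of vm.Name through TryGetMember and TrySetMember, writing through to the model and raising PropertyChanged along the way.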
First, you need a custom attribute to capture the property relationship:

[AttributeUsage(AttributeTargets.Property, AllowMultiple=true)]
public sealed class AffectsOtherPropertyAttribute : Attribute {
  public AffectsOtherPropertyAttribute(
    string otherPropertyName) {
    this.AffectsProperty = otherPropertyName;
  }

  public string AffectsProperty {
    get;
    private set;
  }
}

I set AllowMultiple to true to support scenarios where a property can affect multiple other properties. Applying this attribute to codify the relationship between BirthDate and Age directly in the model is straightforward:

[AffectsOtherProperty("Age")]
public DateTime BirthDate { get; set; }

To use this new model metadata within the dynamic ViewModel class, I can now update the TrySetMember method with three additional lines of code, so it looks like this:

public override bool TrySetMember(
  SetMemberBinder binder, object value) {
  ...
  var affectsProps = property.GetCustomAttributes(
    typeof(AffectsOtherPropertyAttribute), true);
  foreach(AffectsOtherPropertyAttribute otherPropertyAttr
    in affectsProps)
    this.RaisePropertyChanged(
      otherPropertyAttr.AffectsProperty);
}

With the reflected property information already in hand, the GetCustomAttributes method can return any AffectsOtherProperty attributes on the model property. Then the code simply loops over the attributes, raising property-changed events for each one. So changes to the BirthDate property through the ViewModel now automatically raise both BirthDate and Age property-changed events.

It's important to realize that if you explicitly program a property on the dynamic ViewModel class (or, more likely, on model-specific derived ViewModel classes), the DLR will not call the TryGetMember and TrySetMember methods and will instead call the properties directly. In that case, you lose this automatic behavior. However, the code could easily be refactored so that custom properties could use this functionality as well.

Let's return to the problem of the data binding on a deeply nested property (where the ViewModel is the current WPF data context) that looks like this:

{Binding WrappedDomainObject.Address.Country}

Using dynamic proxy properties means the underlying wrapped domain object is no longer exposed, so the data binding would actually look like this:

{Binding Address.Country}

In this case, the Address property would still access the underlying model Address instance directly. However, now when you want to introduce a ViewModel around the Address, you simply add a new property on the Person ViewModel class. The new Address property is very simple:

public DynamicViewModel Address {
  get {
    if( addressViewModel==null )
      addressViewModel =
        new DynamicViewModel(this.Person.Address);
    return addressViewModel;
  }
}
private DynamicViewModel addressViewModel;

No XAML data bindings need to be changed because the property is still called Address, but now the DLR calls the new concrete property rather than the dynamic TryGetMember method. (Notice that the lazy instantiation within this Address property is not thread-safe. However, only the View should be accessing the ViewModel and the WPF/Silverlight view is single-threaded, so this is not a concern.)

This approach can be used even when the model implements INotifyPropertyChanged. The ViewModel can notice this and choose not to proxy property-changed events. In this case, it listens for them from the underlying model instance and then re-raises the events as its own. In the constructor of the dynamic ViewModel class, I perform the check and remember the result:

public DynamicViewModel(DomainObject model) {
  Contract.Requires(model != null,
    "Cannot encapsulate a null model");
  this.ModelInstance = model;

  // Raises its own property changed events
  if( model is INotifyPropertyChanged ) {
    this.ModelRaisesPropertyChangedEvents = true;
    var raisesPropChangedEvents =
      model as INotifyPropertyChanged;
    raisesPropChangedEvents.PropertyChanged +=
      (sender,args) =>
        this.RaisePropertyChanged(args.PropertyName);
  }
}

To prevent duplicate property-changed events, I also need to make a slight modification to the TrySetMember method:

if( this.ModelRaisesPropertyChangedEvents==false )
  this.RaisePropertyChanged(property.Name);

So you can use a dynamic proxy property to dramatically simplify the ViewModel layer by eliminating standard proxy properties.

[Figure 1 The Components of an MVVM Application: the core application projects (View (XAML), View Model, Domain Model, Data Access Layer and the Main Executable) alongside library projects (WpfUtilities, Utilities), each paired with a corresponding unit tests project (WpfUtilities Tests, ViewModel Tests, Model Tests, Data Access Layer Tests, Utilities Tests).]
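The attribute lookup inside the updated TrySetMember can be isolated and verified on its own. The helper below reads the AffectsOtherProperty metadata from a model property via reflection; the ModelMetadata class and the PersonModel name are my own for illustration, while the attribute matches the one defined above.

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

[AttributeUsage(AttributeTargets.Property, AllowMultiple = true)]
public sealed class AffectsOtherPropertyAttribute : Attribute
{
    public AffectsOtherPropertyAttribute(string otherPropertyName)
    {
        this.AffectsProperty = otherPropertyName;
    }

    public string AffectsProperty { get; private set; }
}

public class PersonModel
{
    [AffectsOtherProperty("Age")]
    public DateTime BirthDate { get; set; }

    public int Age
    {
        get { return DateTime.Now.Year - this.BirthDate.Year; } // simplified
    }
}

public static class ModelMetadata
{
    // Names of the other properties affected when the given property is set;
    // this is the same GetCustomAttributes walk that TrySetMember performs.
    public static IEnumerable<string> AffectedBy(Type modelType, string propertyName)
    {
        PropertyInfo property = modelType.GetProperty(propertyName);
        object[] attributes = property.GetCustomAttributes(
            typeof(AffectsOtherPropertyAttribute), true);
        foreach (AffectsOtherPropertyAttribute attribute in attributes)
            yield return attribute.AffectsProperty;
    }
}
```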
[Figure 2 Document Manager View Adapter: third-party tabbed workspace and docking controls sit atop a DocumentManagerAdapter in the View layer, which connects to the singleton DocumentManager in the ViewModel layer; the DocumentManager holds 0..* Document instances such as ViewAllClientsDocument, ClientDocument and MortgageDocument.]

This significantly reduces coding, testing, documentation and long-term maintenance. Adding new properties to the model no longer requires updating the ViewModel layer unless there is very special View logic for the new property. Additionally, this approach can solve difficult issues like related properties. The common TrySetMember method could also help you implement an undo capability because user-driven property changes all flow through the TrySetMember method.

Pros and Cons
Many developers are leery of reflection (and the DLR) because of performance concerns. In my own work I haven't found this to be a problem. The performance penalty for the user when setting a single property in the UI is not likely to be noticed. That may not be the case in highly interactive UIs, such as multi-touch design surfaces.

The only major performance issue is in the initial population of the view when there are a large number of fields. Usability concerns should naturally limit the number of fields you're exposing on any screen so that the performance of the initial data bindings through this DLR approach is undetectable.

Nevertheless, performance should always be carefully monitored and understood as it relates to the user experience. The simple approach previously described could be rewritten with reflection caching. For additional details, see Joel Pobar's article in the July 2005 issue of MSDN Magazine (msdn.microsoft.com/magazine/cc163759).

There is some validity to the argument that code readability and maintainability are negatively affected using this approach because the View layer seems to be referencing properties on the ViewModel that don't actually exist. However, I believe the benefits of eliminating most of the hand-coded proxy properties far outweigh the problems, especially with proper documentation on the ViewModel.

The dynamic proxy property approach does reduce or eliminate the ability to obfuscate the Model layer because the properties on the Model are now referenced by name in the XAML. Using traditional proxy properties does not limit your ability to obfuscate the Model because the properties are referenced directly and would be obfuscated with the rest of the application. However, as most obfuscation tools do not yet work with XAML/BAML, this is largely irrelevant. A code cracker can start from the XAML/BAML and work into the Model layer in either case.

Finally, this approach could be abused by attributing model properties with security-related metadata and expecting the ViewModel to be responsible for enforcing security. Security doesn't seem like a View-specific responsibility, and I believe this is placing too many responsibilities on the ViewModel. In this case, an aspect-oriented approach applied within the Model would be more suitable.

Collections
Collections are one of the most difficult and least satisfactory aspects of the MVVM design pattern. If a collection in the underlying Model is changed by the Model, it's the responsibility of the ViewModel to somehow expose the change so that the View can update itself appropriately.

Unfortunately, in all likelihood the Model does not expose collections that implement the INotifyCollectionChanged interface. In the .NET Framework 3.5, this interface is in the System.Windows.dll, which strongly discourages its use in the Model. Fortunately, in the .NET Framework 4, this interface has migrated to System.dll, making it much more natural to use observable collections from within the Model.

Observable collections in the Model open up new possibilities for Model development and could be used in Windows Forms and Silverlight applications. This is currently my preferred approach because it's much simpler than anything else, and I'm happy the INotifyCollectionChanged interface is moving to a more common assembly.

Without observable collections in the Model, the best that can be done is to expose some other mechanism—most likely custom events—on the Model to indicate when the collection has changed. This should be done in a Model-specific way. For example, if the Person class had a collection of addresses it could expose events such as:

public event EventHandler<AddressesChangedEventArgs>
  NewAddressAdded;
public event EventHandler<AddressesChangedEventArgs>
  AddressRemoved;

This is preferable to raising a custom collection event designed specifically for the WPF ViewModel. However, it's still difficult to expose collection changes in the ViewModel. Likely, the only recourse is to raise a property-changed event on the entire ViewModel collection property. This is an unsatisfactory solution at best.

Another problem with collections is determining when or if to wrap each Model instance in the collection within a ViewModel instance. For smaller collections, the ViewModel may expose a new observable collection and copy everything in the underlying Model collection into the ViewModel observable collection, wrapping each Model item in the collection in a corresponding ViewModel instance as it goes. The ViewModel might need to
listen for collection-changed events to transmit user changes back Document Manager Adapter
to the underlying Model. The adapter design shown in Figure 2 ensures that the ViewModel
However, for very large collections that will be exposed in doesn’t require any reference to the View, so it respects the main
some form of virtualizing panel, the easiest and most pragmatic goals of the MVVM design pattern. (However, in this case, the
approach is just to expose the Model objects directly. concept of a document is defined in the ViewModel layer rather
than the Model layer because it’s purely a UI concept.)
Instantiating the ViewModel
Another problem with the MVVM design pattern that’s seldom
discussed is where and when the ViewModel instances should The real power of the dynamic
be instantiated. This problem is also frequently overlooked in
discussions of similar design patterns such as MVC. proxy property approach is in
My preference is to write a ViewModel singleton that provides
the main ViewModel objects from which the View can easily the property setters.
retrieve all other ViewModel objects as required. Often this master
ViewModel object provides the command implementations so
the View can support opening of documents. The ViewModel document manager is responsible for maintaining
However, most of the applications I’ve worked with provide a the collection of open ViewModel documents and knowing which
document-centric interface, usually using a tabbed workspace similar document is currently active. This design allows the ViewModel lay-
to Visual Studio. So in the ViewModel layer I want to think in terms er to open and close documents using the document manager, and to
of documents, and the documents expose one or more ViewModel change the active document without any knowledge of the View. The
objects wrapping particular Model objects. Standard WPF commands ViewModel side of this approach is reasonably straightforward. The
in the ViewModel layer can then use the persistence layer to retrieve the necessary objects, wrap them in ViewModel instances and create ViewModel document managers to display them.
In the sample application included with this article, the ViewModel command for creating a new Person is:

internal class OpenNewPersonCommand : ICommand {
  ...
  // Open a new person in a new window.
  public void Execute(object parameter) {
    var person = new MvvmDemo.Model.Person();
    var document = new PersonDocument(person);
    DocumentManager.Instance.ActiveDocument = document;
  }
}

The ViewModel document manager referenced in the last line is a singleton that manages all open ViewModel documents. The question is, how does the collection of ViewModel documents get exposed in the View?
The built-in WPF tab control does not provide the kind of powerful multiple-document interface users have come to expect. Fortunately, third-party docking and tabbed-workspace products are available. Most of them strive to emulate the tabbed document look of Visual Studio, including the dockable tool windows, split views, Ctrl+Tab pop-up windows (with mini-document views) and more. Unfortunately, most of these components don't provide built-in support for the MVVM design pattern. But that's OK, because you can easily apply the Adapter design pattern to link the ViewModel document manager to the third-party view component.
ViewModel classes in the sample application are shown in Figure 3. The Document base class exposes several internal lifecycle methods (Activated, LostActivation and DocumentClosed) that are called by the document manager to keep the document up-to-date about what's going on. The document also implements an INotifyPropertyChanged interface so that it can support data binding. For example, the adapter data binds the view document's Title property to the ViewModel's DocumentTitle property.
The most complex piece of this approach is the adapter class, and I've provided a working copy in the project accompanying this article. The adapter subscribes to events on the document manager and uses those events to keep the tabbed-workspace control

Figure 3 The ViewModel Layer's Document Manager and Document Classes
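The DocumentManager singleton referenced in the command above isn't reproduced in this excerpt. As a rough illustration only (the DocumentManager, ActiveDocument and Document names come from the article; every member body here is my assumption, not the sample project's code), a minimal manager that also tolerates the re-entrant close scenario the article goes on to describe might look like:

```csharp
using System;
using System.Collections.Generic;

// Stand-in for the article's ViewModel Document base class.
public class Document
{
    public string DocumentTitle { get; set; }
    internal virtual void Activated() { }
    internal virtual void LostActivation() { }
    internal virtual void DocumentClosed() { }
}

// Hypothetical sketch of the singleton that tracks open ViewModel documents.
public sealed class DocumentManager
{
    public static readonly DocumentManager Instance = new DocumentManager();
    private DocumentManager() { }

    private readonly List<Document> open = new List<Document>();
    private Document active;

    // An adapter can subscribe to these to keep the view in sync.
    public event Action<Document> DocumentOpened = delegate { };
    public event Action<Document> DocumentClosed = delegate { };

    public IReadOnlyList<Document> OpenDocuments { get { return open; } }

    public Document ActiveDocument
    {
        get { return active; }
        set
        {
            if (ReferenceEquals(active, value)) return;
            if (active != null) active.LostActivation();
            if (value != null && !open.Contains(value))
            {
                open.Add(value);
                DocumentOpened(value);
            }
            active = value;
            if (active != null) active.Activated();
        }
    }

    public void Close(Document document)
    {
        // A second, re-entrant close for the same document is a no-op,
        // which is the "sympathetic" behavior the article asks for.
        if (!open.Remove(document)) return;
        if (ReferenceEquals(active, document)) active = null;
        document.DocumentClosed();
        DocumentClosed(document);
    }
}
```

With a class shaped like this, OpenNewPersonCommand.Execute from the listing above would simply assign a new PersonDocument to Instance.ActiveDocument.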
msdnmagazine.com July 2010 79
Figure 4 Linking the View Control and ViewModel Document

private static readonly DependencyProperty
  ViewModelDocumentProperty =
    DependencyProperty.RegisterAttached(
      "ViewModelDocument", typeof(Document),
      typeof(DocumentManagerAdapter), null);

private static Document GetViewModelDocument(
  AvalonDock.ManagedContent viewDoc) {

  return viewDoc.GetValue(ViewModelDocumentProperty)
    as Document;
}

private static void SetViewModelDocument(
  AvalonDock.ManagedContent viewDoc, Document document) {

  viewDoc.SetValue(ViewModelDocumentProperty, document);
}

Figure 5 Setting the Attached Property

private AvalonDock.DocumentContent CreateNewViewDocument(
  Document viewModelDocument) {

  var viewDoc = new AvalonDock.DocumentContent();
  viewDoc.DataContext = viewModelDocument;
  viewDoc.Content = viewModelDocument;

  Binding titleBinding = new Binding("DocumentTitle") {
    Source = viewModelDocument };

  viewDoc.SetBinding(AvalonDock.ManagedContent.TitleProperty,
    titleBinding);
  viewDoc.Closing += OnUserClosingDocument;
  DocumentManagerAdapter.SetViewModelDocument(viewDoc,
    viewModelDocument);

  return viewDoc;
}
up-to-date. For example, when the document manager indicates that a new document has been opened, the adapter receives an event, wraps the ViewModel document in whatever WPF control is required and then exposes that control in the tabbed workspace.

The adapter has one other responsibility: keeping the ViewModel document manager synchronized with the user's actions. The adapter must therefore also listen for events from the tabbed-workspace control so that when the user changes the active document or closes a document the adapter can notify the document manager.

While none of this logic is very complex, there are some caveats. There are several scenarios where the code becomes re-entrant, and this must be handled gracefully. For example, if the ViewModel uses the document manager to close a document, the adapter will receive the event from the document manager and close the physical document window in the view. This causes the tabbed-workspace control to also raise a document-closing event, which the adapter will also receive, and the adapter's event handler will, of course, notify the document manager that the document should be closed. The document has already been closed, so the document manager needs to be sympathetic enough to allow this.

"Observable collections in the Model open up new possibilities."

The other difficulty is that the View's adapter must be able to link a View tabbed-document control with a ViewModel Document object. The most robust solution is to use a WPF attached dependency property. The adapter declares a private attached dependency property that's used to link the View window control to its ViewModel document instance.

In the sample project for this article, I use an open source tabbed-workspace component called AvalonDock, so my attached dependency property looks like the code shown in Figure 4.

When the adapter creates a new View window control, it sets the attached property on the new window control to the underlying ViewModel document (see Figure 5). You can also see the title data binding being configured here, and see how the adapter is configuring both the data context and the content of the View document control.

By setting the View document control's content, I let WPF do the heavy lifting of figuring out how to display this particular type of ViewModel document. The actual data templates for the ViewModel documents are in a resource dictionary included by the main XAML window.

I've used this ViewModel document-manager approach with both WPF and Silverlight successfully. The only View layer code is the adapter, and this can be tested easily and then left alone. This approach keeps the ViewModel completely independent of the View, and I have on one occasion switched vendors for my tabbed-workspace component with only minimal changes in the adapter class and absolutely no changes to the ViewModel or Model.

The ability to work with documents in the ViewModel layer feels elegant, and implementing ViewModel commands like the one I demonstrated here is easy. The ViewModel document classes also become obvious places to expose ICommand instances related to the document. The View hooks into these commands and the beauty of the MVVM design pattern shines through. Additionally, the ViewModel document manager approach also works with the singleton approach if you need to expose data before the user has created any documents (perhaps in a collapsible tool window).

Wrap Up
The MVVM design pattern is a powerful and useful pattern, but no design pattern can solve every issue. As I've demonstrated here, combining the MVVM pattern and goals with other patterns, such as adapters and singletons, while also leveraging new .NET Framework 4 features, such as dynamic dispatch, can help address many common concerns around implementing the MVVM design pattern. Employing MVVM the right way makes for much more elegant and maintainable WPF and Silverlight applications. For further reading about MVVM, see Josh Smith's article in the February 2009 issue of MSDN Magazine at msdn.microsoft.com/magazine/dd419663.

ROBERT MCCARTER is a Canadian freelance software developer, architect and entrepreneur. Read his blog at robertmccarter.wordpress.com.

THANKS to the following technical expert for reviewing this article: Josh Smith

SECURITY BRIEFS BRYAN SULLIVAN

View State Security

Effectively managing user state in Web applications can be a tricky balancing act of performance, scalability, maintainability and security. The security consideration is especially evident when you're managing user state stored on the client. I have a colleague who used to say that handing state data to a client is like handing an ice cream cone to a 5-year-old: you may get it back, but you definitely can't expect to get it back in the same shape it was when you gave it out!

In this month's column, we'll examine some security implications around client-side state management in ASP.NET applications; specifically, we're going to look at view state security. (Please note: this article assumes that you're familiar with the concept of ASP.NET view state. If not, check out "Understanding ASP.NET View State" by Scott Mitchell at msdn.microsoft.com/library/ms972976.)

If you don't think there's any data stored in your applications' view state worth protecting, think again. Sensitive information can find its way into view state without you even realizing it. And even if you're vigilant about preventing sensitive information loss through view state, an attacker can still tamper with that view state and cause even bigger problems for you and your users. Luckily, ASP.NET has some built-in defenses against these attacks. Let's take a look at how these defenses can be used correctly.

Threat No. 1: Information Disclosure
At Microsoft, development teams use the STRIDE model to classify threats. STRIDE is a mnemonic that stands for:
• Spoofing
• Tampering
• Repudiation
• Information Disclosure
• Denial of Service
• Elevation of Privilege

The main two STRIDE categories of concern from the view state security perspective are Information Disclosure and Tampering (although a successful tampering attack can lead to a possible Elevation of Privilege; we'll discuss that in more detail later). Information disclosure is the simpler of these threats to explain, so we'll discuss that first.

One of the most unfortunately persistent misconceptions around view state is that it is encrypted or somehow unreadable by the user. After all, a view state string certainly doesn't look decomposable:

<input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE" value="/wEPDwULLTE2MTY2ODcyMjkPFgIeCHBhc3N3b3JkBQlzd29yZGZpc2hkZA==" />

However, this string is merely base64-encoded, not encrypted with any kind of cryptographically strong algorithm. We can easily decode and deserialize this string using the limited object serialization (LOS) formatter class System.Web.UI.LosFormatter:

LosFormatter formatter = new LosFormatter();
object viewstateObj = formatter.Deserialize("/wEPDwULLTE2MTY2ODcyMjkPFgIeCHBhc3N3b3JkBQlzd29yZGZpc2hkZA==");

A quick peek in the debugger (see Figure 1) reveals that the deserialized view state object is actually a series of System.Web.UI.Pair objects ending with a System.Web.UI.IndexedString object with a value of "password" and a corresponding string value of "swordfish."

If you don't want to go to the trouble of writing your own code to deserialize view state objects, there are several good view state decoders available for free download on the Internet, including Fritz Onion's ViewState Decoder tool available at alt.pluralsight.com/tools.aspx.

Encrypting View State
In "The Security Development Lifecycle: SDL: A Process for Developing Demonstrably More Secure Software" (Microsoft Press, 2006), Michael Howard and Steve Lipner discuss technologies that can be used to mitigate STRIDE threats. Figure 2 shows threat types and their associated mitigation techniques.

Because we're dealing with an information disclosure threat to our data stored in the view state, we need to apply a confidentiality mitigation technique; the most effective confidentiality mitigation technology in this case is encryption.

ASP.NET version 2.0 has a built-in feature to enable encryption of view state—the ViewStateEncryptionMode property, which can be enabled either through a page directive or in the application's web.config file:

<%@ Page ViewStateEncryptionMode="Always" %>

Or

<configuration>
  <system.web>
    <pages viewStateEncryptionMode="Always">
Figure 1 Secret View State Data Revealed by the Debugger

There are three possible values for ViewStateEncryptionMode: Always (the view state is always encrypted); Never (the view state is never encrypted); and Auto (the view state is only encrypted if one of the page's controls explicitly requests it). The Always and Never values are pretty self-explanatory, but Auto requires a little more explanation.

If a server control persists sensitive information into its page's view state, the control can request that the page encrypt the view state by calling the Page.RegisterRequiresViewStateEncryption method (note that in this case the entire view state is encrypted, not just the view state corresponding to the control that requested it):

public class MyServerControl : WebControl
{
  protected override void OnInit(EventArgs e)
  {
    Page.RegisterRequiresViewStateEncryption();
    base.OnInit(e);
  }
  ...
}

However, there is a caveat. The reason the method is named RegisterRequiresViewStateEncryption, and not something like EnableViewStateEncryption, is because the page can choose to ignore the request. If the page's ViewStateEncryptionMode is set to Auto (or Always), the control's request will be granted and the view state will be encrypted. If ViewStateEncryptionMode is set to Never, the control's request will be ignored and the view state will be unprotected.

This is definitely something to be aware of if you're a control developer. You should consider keeping potentially sensitive information out of the view state (which is always a good idea). In extreme cases where this isn't possible, you might consider overriding the control's SaveViewState and LoadViewState methods to manually encrypt and decrypt the view state there.

Server Farm Considerations
In a single-server environment, it's sufficient just to enable ViewStateEncryptionMode, but in a server farm environment there's some additional work to do. Symmetric encryption algorithms—like the ones that ASP.NET uses to encrypt the view state—require a key. You can either explicitly specify a key in the web.config file, or ASP.NET can automatically generate a key for you. Again, in a single-server environment it's fine to let the framework handle key generation, but this won't work for a server farm. Each server will generate its own unique key, and requests that get load balanced between different servers will fail because the decryption keys won't match.

You can explicitly set both the cryptographic algorithm and the key to use in the machineKey element of your application's web.config file:

<configuration>
  <system.web>
    <machineKey decryption="AES" decryptionKey="143a…">

For the encryption algorithm, you can choose AES (the default value), DES or 3DES. Of these, DES is explicitly banned by the Microsoft SDL Cryptographic Standards, and 3DES is strongly discouraged. I recommend that you stick with AES for maximum security.

Once you've selected an algorithm, you need to create a key. However, remember that the strength of this system's security depends on the strength of that key. Don't use your pet's name, your significant other's birthday or any other easily guessable value! You need to use a cryptographically strong random number. Here's a code snippet to create one in the format that the machineKey element expects (hexadecimal characters only) using the .NET RNGCryptoServiceProvider class:

RNGCryptoServiceProvider csp = new RNGCryptoServiceProvider();
byte[] data = new byte[24];
csp.GetBytes(data);
string value = String.Join("", BitConverter.ToString(data).Split('-'));

At a minimum, you should generate 16-byte random values for your keys; this is the minimum value allowed by the SDL Cryptographic Standards. The maximum length supported for AES keys is 24 bytes (48 hex chars) in the Microsoft .NET Framework 3.5 and earlier, and 32 bytes (64 hex chars) in the .NET Framework 4. DES supports a maximum key length of only 8 bytes and 3DES a maximum of 24 bytes, regardless of the framework version. Again, I recommend that you avoid these algorithms and use AES instead.

Threat No. 2: Tampering
Tampering is the other significant threat. You might think the same encryption defense that keeps attackers from prying into the view state would also prevent them from changing it, but this is wrong. Encryption doesn't provide defense against tampering: Even with encrypted data, it's still possible for an attacker to flip bits in the encrypted data.

Figure 2 Techniques to Mitigate STRIDE Threats
Threat Type             Mitigation Technique
Spoofing                Authentication
Tampering               Integrity
Repudiation             Non-repudiation services
Information Disclosure  Confidentiality
Denial of Service       Availability
Elevation of Privilege  Authorization
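Building on the key-generation snippet above, here is a small self-contained sketch (my own illustration, not code from the column) that generates both keys and prints a ready-to-paste machineKey element. The attribute names are the ones used in the column's config fragments; the exact element layout is an assumption:

```csharp
using System;
using System.Security.Cryptography;

// Generate a cryptographically strong random key and format it as the
// hex-only string the machineKey element expects.
string MakeKey(int bytes)
{
    var csp = new RNGCryptoServiceProvider();
    byte[] data = new byte[bytes];
    csp.GetBytes(data);
    return String.Join("", BitConverter.ToString(data).Split('-'));
}

// A 24-byte AES decryption key and a 64-byte validation key; the sizes
// follow the minimums discussed in the text.
string element = String.Format(
    "<machineKey decryption=\"AES\" decryptionKey=\"{0}\"\n" +
    "            validation=\"SHA1\" validationKey=\"{1}\" />",
    MakeKey(24), MakeKey(64));

Console.WriteLine(element);
```

Run once, paste the output into web.config on every machine in the farm, and every server will then share the same keys.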


Figure 3 Applying a Message Authentication Code (MAC)
[Diagram: the plaintext and a secret key are fed into a hash function; the resulting MAC is appended to the plaintext.]

Take another look at Figure 2. To mitigate a tampering threat, we need to use a data integrity technology. The best choice here is still a form of cryptography, and it's still built into ASP.NET, but instead of using a symmetric algorithm to encrypt the data, we'll use a hash algorithm to create a message authentication code (MAC) for the data.

The ASP.NET feature to apply a MAC is called EnableViewStateMac, and just like ViewStateEncryptionMode, you can apply it either through a page directive or through the application's web.config file:

<%@ Page EnableViewStateMac="true" %>

Or

<configuration>
  <system.web>
    <pages enableViewStateMac="true">

To understand what EnableViewStateMac is really doing under the covers, let's first take a high-level look at how view state is written to the page when view state MAC is not enabled:
1. View state for the page and all participating controls is gathered into a state graph object.
2. The state graph is serialized into a binary format.
3. The serialized byte array is encoded into a base-64 string.
4. The base-64 string is written to the __VIEWSTATE form value in the page.

When view state MAC is enabled, there are three additional steps that take place between the previous steps 2 and 3:
1. View state for the page and all participating controls is gathered into a state graph object.
2. The state graph is serialized into a binary format.
   a. A secret key value is appended to the serialized byte array.
   b. A cryptographic hash is computed for the new serialized byte array.
   c. The hash is appended to the end of the serialized byte array.
3. The serialized byte array is encoded into a base-64 string.
4. The base-64 string is written to the __VIEWSTATE form value in the page.

Whenever this page is posted back to the server, the page code validates the incoming __VIEWSTATE by taking the incoming state graph data (deserialized from the __VIEWSTATE value), adding the same secret key value, and recomputing the hash value. If the new recomputed hash value matches the hash value supplied at the end of the incoming __VIEWSTATE, the view state is considered valid and processing proceeds (see Figure 3). Otherwise, the view state is considered to have been tampered with and an exception is thrown.

The security of this system lies in the secrecy of the secret key value. This value is always stored on the server, either in memory or in a configuration file (more on this later)—it is never written to the page. Without knowing the key, there would be no way for an attacker to compute a valid view state hash.

Theoretically, with enough computing power an attacker could reverse-engineer the key: He has knowledge of a computed hash value and knowledge of the corresponding plaintext, and there aren't too many options available for the hash algorithm. He would only have to cycle through all the possible key values, re-compute the hash for the known plaintext plus the current key and compare it to the known hash. Once the values match, he knows he's found the correct key and can now attack the system at will. The only problem with this is the sheer number of possible values: The default key size is 512 bits, which means there are 2 to the power of 512 different possibilities, which is so large a number that a brute force attack is completely unfeasible.

Exploiting MAC-Less View State
The default value of EnableViewStateMac is true, so protecting your applications is as simple as not setting it to false. Unfortunately, there is some misleading documentation concerning the performance impact of EnableViewStateMac, and some Web sites are encouraging developers to disable view state MAC in order to improve the performance of their applications. Even the MSDN online documentation for PagesSection.EnableViewStateMacProperty is guilty of this, stating: "Do not set EnableViewStateMac to true if performance is a key consideration." Do not follow this advice! (Hopefully, by the time you're reading this, the documentation will have been changed to better reflect security considerations.)

Any page that has its view state MAC disabled is potentially vulnerable to a cross-site scripting attack against the __VIEWSTATE parameter. The first proof-of-concept of this attack was developed by David Byrne of Trustwave, and demonstrated by Byrne and his colleague Rohini Sulatycki at the Black Hat DC conference in February 2010. To execute this attack, the attacker crafts a view state graph where the malicious script code he wants to execute is set as the persisted value of the innerHtml property of the page's form element. In XML form, this view state graph would look something like Figure 4.

The attacker then base-64 encodes the malicious view state and appends this string as the value of a __VIEWSTATE query string parameter for the vulnerable page. For example, if the page
home.aspx on the site www.contoso.com was known to have view state MAC disabled, the attack URI would be http://www.contoso.com/home.aspx?__VIEWSTATE=/w143a...

Figure 4 XML Code for View State MAC Attack

<viewstate>
  <Pair>
    <Pair>
      <String>…</String>
      <Pair>
        <ArrayList>
          <Int32>0</Int32>
          <Pair>
            <ArrayList>
              <Int32>1</Int32>
              <Pair>
                <ArrayList>
                  <IndexedString>innerhtml</IndexedString>
                  <String>…malicious script goes here…</String>
                </ArrayList>
              </Pair>
            </ArrayList>
          </Pair>
        </ArrayList>
      </Pair>
    </Pair>
  </Pair>
</viewstate>

All that remains is to trick a potential victim into following this link. Then the page code will deserialize the view state from the incoming __VIEWSTATE query string parameter and write the malicious script as the innerHtml of the form. When the victim gets the page, the attacker's script will immediately execute in the victim's browser, with the victim's credentials.

This attack is especially dangerous because it completely bypasses all of the usual cross-site scripting (XSS) defenses. The XSS Filter in Internet Explorer 8 will not block it. The ValidateRequest feature of ASP.NET will block several common XSS attack vectors, but it does not deserialize and analyze incoming view state, so it's also no help in this situation. The Microsoft Anti-Cross Site Scripting (Anti-XSS) Library (now included as part of the Microsoft Web Protection Library) is even more effective against XSS than ValidateRequest; however, neither the Anti-XSS Library input sanitization features nor its output encoding features will protect against this attack either. The only real defense is to ensure that view state MAC is consistently applied to all pages.

More Server Farm Considerations
Similar to ViewStateEncryptionMode, there are special considerations with EnableViewStateMac when deploying applications in a server farm environment. The secret value used for the view state hash must be constant across all machines in the farm, or the view state validation will fail.

You can specify both the validation key and the HMAC algorithm to use in the same location where you specify the view state encryption key and algorithm—the machineKey element of the web.config file:

<configuration>
  <system.web>
    <machineKey validation="AES" validationKey="143a...">

If your application is built on the .NET Framework 3.5 or earlier, you can choose SHA1 (the default value), AES, MD5 or 3DES as the MAC algorithm. If you're running .NET Framework 4, you can also choose MACs from the SHA-2 family: HMACSHA256, HMACSHA384 or HMACSHA512. Of these choices, MD5 is explicitly banned by the SDL Crypto Standards and 3DES is strongly discouraged. SHA1 is also discouraged, but for .NET Framework 3.5 and earlier applications it's your best option. .NET Framework 4 applications should definitely be configured with either HMACSHA512 or HMACSHA256 as the validation algorithm.

After you choose a MAC algorithm, you'll also need to manually specify the validation key. Remember to use cryptographically strong random numbers: if necessary, you can refer to the key generation code specified earlier. You should use at least 128-byte validation keys for either HMACSHA384 or HMACSHA512, and at least 64-byte keys for any other algorithm.

You Can't Hide Vulnerable View State
Unlike a vulnerable file permission or database command that may be hidden deep in the server-side code, vulnerable view state is easy to find just by looking for it. If an attacker wanted to test a page to see whether its view state was protected, he could simply make a request for that page himself and pull the base-64 encoded view state value from the __VIEWSTATE form value. If the LosFormatter class can successfully deserialize that value, then it has not been encrypted. It's a little trickier—but not much—to determine whether view state MAC has been applied.

The MAC is always applied to the end of the view state value, and since hash sizes are constant for any given hash algorithm, it's fairly easy to determine whether a MAC is present. If HMACSHA512 has been used, the MAC will be 64 bytes; if HMACSHA384 has been used, it will be 48 bytes; and if any other algorithm has been used, it will be 32 bytes. If you strip 32, 48 or 64 bytes off of the end of the base-64 decoded view state value, and any of these deserialize with LosFormatter into the same object as before, then view state MAC has been applied. If none of these trimmed view state byte arrays will successfully deserialize, then view state MAC hasn't been applied and the page is vulnerable.

Casaba Security makes a free tool for developers called Watcher that can help automate this testing. Watcher is a plug-in for Eric Lawrence's Fiddler Web debugging proxy tool, and it works by passively analyzing the HTTP traffic that flows through the proxy. It will flag any potentially vulnerable resources that pass through—for example, an .aspx page with a __VIEWSTATE missing a MAC. If you're not already using both Fiddler and Watcher as part of your testing process, I highly recommend giving them a try.

Wrapping Up
View state security is nothing to take lightly, especially considering the new view state tampering attacks that have recently been demonstrated. I encourage you to take advantage of the ViewStateEncryptionMode and EnableViewStateMac security mechanisms built into ASP.NET.

BRYAN SULLIVAN is a security program manager for the Microsoft Security Development Lifecycle team, where he specializes in Web application security issues. He's the author of "Ajax Security" (Addison-Wesley, 2007).

THANKS to the following technical expert for reviewing this article: Michael Howard
THE WORKING PROGRAMMER TED NEWARD

Going NoSQL with MongoDB, Part 3

Last time, I continued my exploration of MongoDB via the use of exploration tests. I described how to start and stop the server during a test, then showed how to capture cross-document references and discussed some of the reasoning behind the awkwardness of doing so. Now it's time to explore some more intermediate MongoDB capabilities: predicate queries, aggregate functions and the LINQ support provided by the MongoDB.Linq assembly. I'll also provide some notes about hosting MongoDB in a production environment.

When We Last Left Our Hero . . .
For reasons of space, I won't review much of the previous articles; instead, you can read them online in the May and June issues at msdn.microsoft.com/magazine. In the associated code bundle, however, the exploration tests have been fleshed out to include a pre-existing sample set of data to work with, using characters from one of my favorite TV shows. Figure 1 shows a previous exploration test, by way of refresher. So far, so good.

Calling All Old People . . .
In previous articles, the client code has fetched either all documents that match a particular criteria (such as having a "lastname" field matching a given String or an "_id" field matching a particular Oid), but I haven't discussed how to do predicate-style queries (such as "find all documents where the 'age' field has a value higher than 18"). As it turns out, MongoDB doesn't use a SQL-style interface to describe the query to execute; instead, it uses ECMAScript/JavaScript, and it can in fact accept blocks of code to execute on the server to filter or aggregate data, almost like a stored procedure.

This provides some LINQ-like capabilities, even before looking at the LINQ capabilities supported by the Mongo.Linq assembly. By specifying a document containing a field named "$where" and a code block describing the ECMAScript code to execute, arbitrarily complex queries can be created:

[TestMethod]
public void Where()
{
  ICursor oldFolks =
    db["exploretests"]["familyguy"].Find(
      new Document().Append("$where",
        new Code("this.gender === 'F'")));
  bool found = false;
  foreach (var d in oldFolks.Documents)
    found = true;
  Assert.IsTrue(found, "Found people");
}

As you can see, the Find call returns an ICursor instance, which, although itself isn't IEnumerable (meaning it can't be used in the foreach loop), contains a Documents property that's an IEnumerable<Document>. If the query would return too large a set of data, the ICursor can be limited to return the first n results by setting its Limit property to n.

The predicate query syntax comes in four different flavors, shown in Figure 2. In the second and third forms, "this" always refers to the object being examined.

You can send any arbitrary command (that is, ECMAScript code) through the driver to the database, in fact, using documents to convey the query or command. So, for example, the Count method provided by the IMongoCollection interface is really just a convenience around this more verbose snippet:

[TestMethod]
public void CountGriffins()
{
  var resultDoc = db["exploretests"].SendCommand(
    new Document()
      .Append("count", "familyguy")
      .Append("query",
        new Document().Append("lastname", "Griffin")));
  Assert.AreEqual(6, (double)resultDoc["n"]);
}

This means that any of the aggregate operations described by the MongoDB documentation, such as "distinct" or "group," for example, are accessible via the same mechanism, even though they may not be surfaced as methods on the MongoDB.Driver APIs.

You can send arbitrary commands outside of a query to the database via the "special-name" syntax "$eval," which allows any legitimate ECMAScript block of code to be executed against the server, again essentially as a stored procedure:

[TestMethod]
public void UseDatabaseAsCalculator()
{
  var resultDoc = db["exploretests"].SendCommand(
    new Document()
      .Append("$eval",
        new CodeWScope {
          Value = "function() { return 3 + 3; }",
          Scope = new Document() }));
  TestContext.WriteLine("eval returned {0}", resultDoc.ToString());
  Assert.AreEqual(6, (double)resultDoc["retval"]);
}

Or, use the provided Eval function on the database directly. If that isn't flexible enough, MongoDB permits the storage of user-defined ECMAScript functions on the database instance

Code download available at code.msdn.microsoft.com/mag201007WorkProg.
for execution during queries and server-side execution blocks by adding ECMAScript functions to the special database collection “system.js,” as described on the MongoDB Web site (MongoDB.org).

Figure 1 An Example Exploration Test

  [TestMethod]
  public void StoreAndCountFamilyWithOid()
  {
    var oidGen = new OidGenerator();

    var peter = new Document();
    peter["firstname"] = "Peter";
    peter["lastname"] = "Griffin";
    peter["_id"] = oidGen.Generate();

    var lois = new Document();
    lois["firstname"] = "Lois";
    lois["lastname"] = "Griffin";
    lois["_id"] = oidGen.Generate();

    peter["spouse"] = lois["_id"];
    lois["spouse"] = peter["_id"];

    var cast = new[] { peter, lois };
    var fg = db["exploretests"]["familyguy"];
    fg.Insert(cast);

    Assert.AreEqual(peter["spouse"], lois["_id"]);

    Assert.AreEqual(
      fg.FindOne(new Document().Append("_id",
        peter["spouse"])).ToString(), lois.ToString());

    Assert.AreEqual(2,
      fg.Count(new Document().Append("lastname", "Griffin")));
  }

Figure 2 Four Different Predicate Query Syntaxes

  [TestMethod]
  public void PredicateQuery()
  {
    ICursor oldFolks =
      db["exploretests"]["familyguy"].Find(
        new Document().Append("age",
          new Document().Append("$gt", 18)));
    Assert.AreEqual(6, CountDocuments(oldFolks));

    oldFolks =
      db["exploretests"]["familyguy"].Find(
        new Document().Append("$where",
          new Code("this.age > 18")));
    Assert.AreEqual(6, CountDocuments(oldFolks));

    oldFolks =
      db["exploretests"]["familyguy"].Find("this.age > 18");
    Assert.AreEqual(6, CountDocuments(oldFolks));

    oldFolks =
      db["exploretests"]["familyguy"].Find(
        new Document().Append("$where",
          new Code("function(x) { return this.age > 18; }")));
    Assert.AreEqual(6, CountDocuments(oldFolks));
  }

The Missing LINQ
The C# MongoDB driver also has LINQ support, allowing developers to write MongoDB client code such as what’s shown in Figure 3. And, in keeping with the dynamic nature of the MongoDB database, this sample requires no code-generation, just the call to Linq to return an object that “enables” the MongoDB LINQ provider. At the time of this writing, LINQ support is fairly rudimentary, but it’s being improved and by the time this article reaches print, it will be significantly better. Documentation of the new features and examples will be in the wiki of the project site (wiki.github.com/samus/mongodb-csharp/).

Figure 3 An Example of LINQ Support

  [TestMethod]
  public void LINQQuery()
  {
    var fg = db["exploretests"]["familyguy"];
    var results =
      from d in fg.Linq()
      where ((string)d["lastname"]) == "Brown"
      select d;

    bool found = false;
    foreach (var d in results)
    {
      found = true;
      TestContext.WriteLine("Found {0}", d);
    }
    Assert.IsTrue(found, "No Browns found?");
  }

Shipping Is a Feature
Above all else, if MongoDB is going to be used in a production environment, a few things need to be addressed to make it less painful for the poor chaps who have to keep the production servers and services running.

To begin, the server process (mongod.exe) needs to be installed as a service—running it in an interactive desktop session is typically not allowed on a production server. To that end, mongod.exe supports a service install option, “--install,” which installs it as a service that can then be started either by the Services panel or the command line: “net start MongoDB.” However, as of this writing, there’s one small quirk in the --install command—it infers the path to the executable by looking at the command line used to execute it, so the full path must be given on the command line. This means that if MongoDB is installed in C:\Prg\mongodb, you must install it as a service at a command prompt (with administrative rights) with the command C:\Prg\mongodb\bin\mongod.exe --install.

However, any command-line parameters, such as “--dbpath,” must also appear in that installation command, which means if any of the settings—port, path to the data files and so on—change, the service must be reinstalled. Fortunately, MongoDB supports a configuration file option, given by the “--config” command-line option, so typically the best approach is to pass the full config file path to the service install and do all additional configuration from there:

  C:\Prg\mongodb\bin\mongod.exe --config C:\Prg\mongodb\bin\mongo.cfg --install
  net start MongoDB

As usual, the easiest way to test to ensure the service is running successfully is to connect to it with the mongo.exe client that ships with the MongoDB download. And, because the server communicates with the clients via sockets, you need to poke the required holes in the firewall to permit communication across servers.

These Aren’t the Data Droids You’re Looking For
Of course, unsecured access to the MongoDB server isn’t likely to be a good thing, so securing the server against unwanted visitors becomes a key feature. MongoDB supports authentication, but the security system isn’t anywhere near as sophisticated as that found with “big iron” databases such as SQL Server.

Typically, the first step is to create a database admin login by connecting to the database with the mongo.exe client and adding an

msdnmagazine.com July 2010 89


admin user to the admin database (a database containing data for running and administering the entire MongoDB server), like so:

  > use admin
  > db.addUser("dba", "dbapassword")

Once this is done, any further actions, even within this shell, will require authenticated access, which is done in the shell by explicit authentication:

  > db.authenticate("dba", "dbapassword")

The DBA can now add users to a MongoDB database by changing databases and adding the user using the same addUser call shown earlier:

  > use mydatabase
  > db.addUser("billg", "password")

When connecting to the database via the Mongo.Driver, pass the authentication information as part of the connection string used to create the Mongo object and the same authentication magic will happen transparently:

  var mongo = new Mongo("Username=billg;Password=password");

Naturally, passwords shouldn’t be hardcoded directly into the code or stored openly; use the same password discipline as befits any database-backed application. In fact, the entire configuration (host, port, password and so on) should be stored in a configuration file and retrieved via the ConfigurationManager class.

Reaching Out to Touch Some Code
Periodically, administrators will want to look at the running instance to obtain diagnostic information about the running server. MongoDB supports an HTTP interface for interacting with it, running on a port numerically 1,000 higher than the port it’s configured to use for normal client communication. Thus, because the default MongoDB port is 27017, the HTTP interface can be found on port 28017, as shown in Figure 4.

Figure 4 The HTTP Interface for Interacting with MongoDB

This HTTP interface also permits a more REST-style communication approach, as opposed to the native driver in MongoDB.Driver and MongoDB.Linq; the MongoDB Web site has full details, but essentially the HTTP URL for accessing a collection’s contents is given by adding the database name and collection name, separated by slashes, as shown in Figure 5.

Figure 5 The HTTP URL for Accessing a Collection’s Contents

For more details on creating a REST client using WCF, refer to the MSDN article “REST in Windows Communication Foundation (WCF)” at msdn.microsoft.com/netframework/cc950529.

A Word from Yoda
MongoDB is a quickly evolving product and these articles, while exploring core parts of MongoDB’s functionality, still leave major areas unexamined. While MongoDB isn’t a direct replacement for SQL Server, it’s proving to be a viable storage alternative for areas where the traditional RDBMS doesn’t fare so well. Similarly, just as MongoDB is an evolution in progress, so is the mongodb-csharp project. At the time of this writing, many new improvements were going into beta, including enhancements for working with strongly typed collections using plain objects, as well as greatly improved LINQ support. Keep an eye on both.

In the meantime, however, it’s time to wave farewell to MongoDB and turn our attention to other parts of the developer’s world that the working programmer may not be familiar with (and arguably should be). For now, though, happy coding, and remember, as the great DevGuy Master Yoda once said, “A DevGuy uses the Source for knowledge and defense; never for a hack.”

TED NEWARD is a principal with Neward & Associates, an independent firm specializing in enterprise Microsoft .NET Framework and Java platform systems. He’s written more than 100 articles, is a C# MVP, INETA speaker and the author and coauthor of a dozen books, including “Professional F# 2.0” (Wrox, 2010). He consults and mentors regularly. Reach him at [email protected] and read his blog at blogs.tedneward.com.

THANKS to the following technical experts for reviewing this article: Sam Corder and Craig Wilson
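The port arithmetic and URL scheme described in “Reaching Out to Touch Some Code” can be sketched as runnable code. This is an illustrative JavaScript sketch, not part of the MongoDB driver; the host, database and collection names are made up, and the exact URL layout should be confirmed against the MongoDB docs:

```javascript
// MongoDB's diagnostic HTTP interface listens on a port numerically
// 1,000 higher than the port used for normal client communication.
function httpPort(clientPort) {
  return clientPort + 1000; // default 27017 -> 28017
}

// A collection's contents are reachable REST-style by appending the
// database name and collection name to the URL, separated by slashes.
function collectionUrl(host, clientPort, database, collection) {
  return "http://" + host + ":" + httpPort(clientPort) +
         "/" + database + "/" + collection + "/";
}

console.log(httpPort(27017)); // 28017
console.log(collectionUrl("localhost", 27017, "exploretests", "familyguy"));
// http://localhost:28017/exploretests/familyguy/
```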




UI FRONTIERS CHARLES PETZOLD

The Fluid UI in Silverlight 4

The term “fluid UI” has recently become common to describe UI design techniques that avoid having visual objects suddenly pop into view or jump from one location to another. Instead, visually fluid objects make more graceful entrances and transitions—sometimes as if emerging from fog or sliding into view.

In the past two installments of this column, I’ve discussed some techniques implementing fluid UI on your own. I was partially inspired by the upcoming introduction of a fluid UI feature in Silverlight 4. Now that Silverlight 4 has been officially released, that’s what I’ll be covering here. Silverlight 4’s foray into fluid UI is rather narrowly confined—it’s restricted to the loading and unloading of items in a ListBox—but it gives us some important hints on how to extend fluid UI techniques with our own implementations. More fluid UI behaviors are available in Expression Blend 4.

Templates and the VSM
If you don’t know exactly where to find the new fluid UI feature in Silverlight 4, you might search for many hours. It’s not a class. It’s not a property. It’s not a method. It’s not an event. It’s actually implemented as three new visual states on the ListBoxItem class. Figure 1 shows the documentation for that class, with the TemplateVisualState attribute items slightly rearranged in accordance with the group names.

The Visual State Manager (VSM) is one of the most significant changes made to Silverlight as it was being adapted from Windows Presentation Foundation. In WPF, a style or a template (almost always defined in XAML) can include elements called triggers. These triggers are defined to detect either a property change or an event, and then initiate an animation or a change to another property. For example, a style definition for a control can include a trigger for the IsMouseOver property that sets the background of the control to a blue brush when the property is true. Or a trigger for the MouseEnter and MouseLeave events can initiate a couple of brief animations when those events occur.

In Silverlight, triggers have been largely banished and replaced with the VSM, partially to provide a more structured approach to dynamically changing the characteristics of a control at run time, and partially to avoid dealing with all the different combinations of possibilities when multiple triggers are defined. The VSM is considered to be such an improvement over triggers that it has become part of WPF in the Microsoft .NET Framework 4.

As you can see in Figure 1, the ListBoxItem control supports 11 visual states, but they’re apportioned into four groups. Within any group, one and only one visual state is active at any time. This simple rule greatly reduces the number of possible combinations. For example, you don’t have to figure out how the ListBoxItem should appear when the mouse is hovering over a selected but unfocused item; each group can be handled independently of the others.

The code part of ListBoxItem is responsible for changing visual states through calls to the static VisualStateManager.GoToState method. The control template for ListBoxItem is responsible for responding to these visual states. The template responds to a particular visual state change with a single Storyboard containing one or more animations that target elements in the visual tree. If you want the control to respond to a visual state change immediately without an animation, you can simply define the animation with a duration of 0. But why bother? It’s just as easy to use an animation to help make the control’s visuals more fluid.

The new visual states for supporting fluid UI are BeforeLoaded, AfterLoaded and BeforeUnloaded, all part of the LayoutStates group. By associating animations to these visual states, you can make items in your ListBox fade in, or grow or glide into view when they’re first added to the ListBox, and do something else when they’re removed from the ListBox.

Adapting the ListBoxItem Template
Most programmers will probably access the fluid UI feature of ListBoxItem through Expression Blend, but I’m going to show you how to do it directly in markup.

The default control template for ListBoxItem has no animations associated with the visual states in the LayoutStates group. That’s your job. Unfortunately, you can’t just “derive from” the existing ListBoxItem template and supplement it with your own stuff. You must include the whole template in your program. Fortunately, it’s a simple matter of copy and paste. In the Silverlight 4 documentation, look in the Controls section, and then Control Customization, and

Code download available at code.msdn.microsoft.com/mag201007UIFrontiers.
Figure 1 The ListBoxItem Class Documentation

  [TemplateVisualStateAttribute(Name = "Normal", GroupName = "CommonStates")]
  [TemplateVisualStateAttribute(Name = "MouseOver", GroupName = "CommonStates")]
  [TemplateVisualStateAttribute(Name = "Disabled", GroupName = "CommonStates")]
  [TemplateVisualStateAttribute(Name = "Unselected", GroupName = "SelectionStates")]
  [TemplateVisualStateAttribute(Name = "Selected", GroupName = "SelectionStates")]
  [TemplateVisualStateAttribute(Name = "SelectedUnfocused", GroupName = "SelectionStates")]
  [TemplateVisualStateAttribute(Name = "Unfocused", GroupName = "FocusStates")]
  [TemplateVisualStateAttribute(Name = "Focused", GroupName = "FocusStates")]
  [TemplateVisualStateAttribute(Name = "BeforeLoaded", GroupName = "LayoutStates")]
  [TemplateVisualStateAttribute(Name = "AfterLoaded", GroupName = "LayoutStates")]
  [TemplateVisualStateAttribute(Name = "BeforeUnloaded", GroupName = "LayoutStates")]
  public class ListBoxItem : ContentControl

Control Styles and Templates, and ListBox Styles and Templates. You’ll find the default style definition for ListBoxItem (which includes the template definition) in the markup that begins:

  <Style TargetType="ListBoxItem">

Under the Setter element for the Template property, you’ll see the entire ControlTemplate used to build a visual tree for each ListBoxItem. The root of the visual tree is a single-cell Grid. The VSM markup occupies a large part of the template at the top of the Grid definition. At the bottom are the actual contents of the Grid: three Rectangle shapes (two filled and one just stroked) and a ContentPresenter, like so:

  <Grid ... >
    ...
    <Rectangle x:Name="fillColor" ... />
    <Rectangle x:Name="fillColor2" ... />
    <ContentPresenter x:Name="contentPresenter" ... />
    <Rectangle x:Name="FocusVisualElement" ... />
  </Grid>

The first two filled Rectangle objects are used to provide background shading for mouse-over and selection (respectively). The third displays a stroked rectangle to indicate input focus. The visibility of these rectangles is controlled by the VSM markup. Notice how each visual group gets its own element to manipulate. The ContentPresenter hosts the item as it’s displayed in the ListBox. Generally, the content of the ContentPresenter is another visual tree defined in a DataTemplate that’s set to the ItemTemplate property of ListBox.

The VSM markup consists of elements of type VisualStateManager.VisualStateGroups, VisualStateGroup and VisualState, all with an XML namespace prefix of “vsm.” In earlier versions of Silverlight, it was necessary to define a namespace declaration for that prefix:

  xmlns:vsm="clr-namespace:System.Windows;assembly=System.Windows"

However, in Silverlight 4 you can simply delete all the vsm prefixes and forget about this namespace declaration. To make changes to this template, you’ll want to copy that whole section of markup into a resource section of a XAML file and give it a key name:

  <Style x:Key="listBoxItemStyle" TargetType="ListBoxItem">
    ...
  </Style>

You then set this style to the ItemContainerStyle property of the ListBox:

  <ListBox ... ItemContainerStyle="{StaticResource listBoxItemStyle}" ....

The “item container” is the object the ListBox creates as a wrapper for each item in the ListBox, and that’s an object of type ListBoxItem. Once you have this ListBoxItem style and template in your program, you can make changes to it.

Fade in, Fade Out
Let’s see how this works in the context of a simple demo program. The downloadable code for this article is a solution entitled FluidUserInterfaceDemo. It consists of two programs, which you can run from my Web site at charlespetzold.com/silverlight/FluidUserInterfaceDemo. Both programs are on the same HTML page, each occupying the whole browser window.

The first program is FluidListBox. Visually, it consists of a ListBox and two buttons to add and remove items. I’ve used the same collection of grocery produce that I’ve used in my last two columns, so MainPage.xaml also contains a DataTemplate named produceDataTemplate.

I decided I wanted to start off simple and have the items fade into view when they’re added to the ListBox and fade out when they’re removed. This involves animating the Opacity property of the Grid that forms the root of the visual tree. To be the target of an animation, that Grid needs a name:

  <Grid Name="rootGrid" ...>

First insert a new VisualStateGroup within the VisualStateManager.VisualStateGroups tags:

  <VisualStateGroup x:Name="LayoutStates">
    ...
  </VisualStateGroup>

That’s where the markup goes for the BeforeLoaded, AfterLoaded and BeforeUnloaded states in the LayoutStates group.

The fade-in is the easier of the two jobs. When an item is first added to the visual tree, it’s said to be “loaded” into the visual tree. Prior to being loaded, the item has a visual state of BeforeLoaded, and then the visual state becomes AfterLoaded.

There are several ways to define the fade-in. The first requires initializing the Opacity to 0 in the Grid tag:

  <Grid Name="rootGrid" Opacity="0" ... >

You then provide an animation for the AfterLoaded state to increase the Opacity property to 1 over the course of 1 second:

  <VisualState x:Name="AfterLoaded">
    <Storyboard>
      <DoubleAnimation Storyboard.TargetName="rootGrid"
                       Storyboard.TargetProperty="Opacity"
                       To="1" Duration="0:0:1" />
    </Storyboard>
  </VisualState>
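Conceptually, the DoubleAnimation above performs a linear interpolation of the animated value over the storyboard's duration. Here's a minimal sketch of that interpolation in JavaScript (chosen because XAML markup isn't directly executable; the function name is invented and this is not Silverlight code):

```javascript
// Linear interpolation as performed by a DoubleAnimation: over
// `duration` seconds the animated value moves from `from` to `to`.
function animatedValue(from, to, duration, t) {
  if (t <= 0) return from;        // before the storyboard starts
  if (t >= duration) return to;   // animation has completed
  return from + (to - from) * (t / duration);
}

// The AfterLoaded storyboard: Opacity rises to 1 over one second.
console.log(animatedValue(0, 1, 1, 0));   // 0
console.log(animatedValue(0, 1, 1, 0.5)); // 0.5
console.log(animatedValue(0, 1, 1, 1));   // 1
```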


Or you can leave the Grid opacity at its default value of 1 and that work. The first defines an animation for the BeforeUnloaded
provide animations for both BeforeLoaded and AfterLoaded: state together with a transition for that state:
<VisualState x:Name="BeforeLoaded"> <VisualState x:Name="BeforeUnloaded">
<Storyboard> <Storyboard>
<DoubleAnimation Storyboard.TargetName="rootGrid" <DoubleAnimation Storyboard.TargetName="rootGrid"
Storyboard.TargetProperty="Opacity" Storyboard.TargetProperty="Opacity"
To="0" Duration="0:0:0" /> To="0" Duration="0:0:1" />
</Storyboard> </Storyboard>
</VisualState> </VisualState>

<VisualState x:Name="AfterLoaded"> <VisualStateGroup.Transitions>


<Storyboard> <VisualTransition From="AfterLoaded"
<DoubleAnimation Storyboard.TargetName="rootGrid" To="BeforeUnloaded"
Storyboard.TargetProperty="Opacity" GeneratedDuration="0:0:1" />
To="1" Duration="0:0:1" /> </VisualStateGroup.Transitions>
</Storyboard>
</VisualState>
Notice that the Duration on the BeforeLoaded state is 0, which
effectively just sets the Opacity property to 0. Using a whole Story- Because the fluid UI feature is
board and DoubleAnimation just to set a property might seem like
overkill, but it also demonstrates the flexibility of animations. The implemented as visual states on
overhead is actually not very much.
The approach I personally prefer—primarily because it’s the ListBoxItem, it isn’t available in
simplest—is to leave the Opacity property of the Grid at its default
value of 1 and provide only an animation for the AfterLoaded state the ItemsControl.
with a From value specified, rather than a To value:
<VisualState x:Name="AfterLoaded">
<Storyboard> The second approach defines an empty tag for the BeforeUnloaded
<DoubleAnimation Storyboard.TargetName="rootGrid" state and an animation for the VisualTransition:
Storyboard.TargetProperty="Opacity"
From="0" Duration="0:0:1" /> <VisualState x:Name="BeforeUnloaded" />
</Storyboard>
</VisualState> <VisualStateGroup.Transitions>
<VisualTransition From="AfterLoaded"
Now the animation goes from the value of 0 to its base value, To="BeforeUnloaded"
which is 1. You can use this identical technique with the Before- GeneratedDuration="0:0:1">
<Storyboard>
Loaded state. But watch out: The BeforeLoaded state occurs after <DoubleAnimation Storyboard.TargetName="rootGrid"
the ListBoxItem is created and initialized, but before it’s added to Storyboard.TargetProperty="Opacity"
To="0" Duration="0:0:1" />
the visual tree, at which point the AfterLoaded state occurs. That’s </Storyboard>
just a tiny gap of time. You’ll get into trouble if you define an ani- </VisualTransition>
</VisualStateGroup.Transitions>
mation for BeforeLoaded but also define an empty VisualState tag
Figure 2 shows the completed markup for the AfterLoaded and
for AfterLoaded:
BeforeUnloaded states as they appear in the ListBoxItem template
<VisualState x:Name="BeforeLoaded">
<Storyboard> in the MainPage.xaml file of the FluidListBox project.
<DoubleAnimation Storyboard.TargetName="rootGrid" One more warning: By default, the ListBox stores its items in
Storyboard.TargetProperty="Opacity"
From="0" Duration="0:0:1" /> a VirtualizingStackPanel. This means the actual items and their
</Storyboard> containers aren’t generated until they’re required to be visually
</VisualState>
displayed. If you define an animation for the AfterLoaded state, and
<VisualState x:Name="AfterLoaded" /> then fill the ListBox up with items, the items will fade in as they’re
As soon as the item is loaded, the storyboard for BeforeLoaded scrolled into view. This is probably undesirable. The easy solution
is terminated and you’ll get no fade-in effect. However, you can is to replace the VirtualizingStackPanel with a regular StackPanel.
make that markup work if you also add the following: The required markup on the ListBox is trivial:
<VisualStateGroup.Transitions> <ListBox.ItemsPanel>
<VisualTransition From="BeforeLoaded" <ItemsPanelTemplate>
To="AfterLoaded" <StackPanel />
GeneratedDuration="0:0:1" /> </ItemsPanelTemplate>
</VisualStateGroup.Transitions> </ListBox.ItemsPanel>
This defines a one-second transition period between the Before-
Loaded and the AfterLoaded states. That transition period gives the Extending to ItemsControl
BeforeLoaded animation time to complete before the AfterLoaded Because the fluid UI feature is implemented as visual states on
state shuts it off. ListBoxItem, it isn’t available in the ItemsControl. As you know,
The fade-out process isn’t quite as straightforward. When the ItemsControl simply displays a collection of items and lets the
item is about to be removed from the ListBox, the BeforeUnloaded user navigate through them. There’s no concept of selection or
state is set, but then the item is immediately removed so any input focus among the items. For that reason, ItemsControl doesn’t
animation that began won’t be visible! I’ve found two approaches require a special class like ListBoxItem to host the items. It just
94 msdn magazine UI Frontiers
uses a ContentPresenter. Because ContentPresenter derives from FrameworkElement rather than Control, it doesn’t have a template in which to define the behavior of visual states.

What you can do, however, is derive a class from ItemsControl that uses ListBoxItem to host its items. This is actually much easier than you might assume. Figure 3 shows the entire code for FluidableItemsControl.

Figure 2 An Excerpt from the ListBoxItem Template in FluidListBox

  <ControlTemplate TargetType="ListBoxItem">
    <Grid Name="rootGrid" Background="{TemplateBinding Background}">
      <VisualStateManager.VisualStateGroups>

        <!-- Additions to standard template -->
        <VisualStateGroup x:Name="LayoutStates">

          <VisualState x:Name="AfterLoaded">
            <Storyboard>
              <DoubleAnimation Storyboard.TargetName="rootGrid"
                               Storyboard.TargetProperty="Opacity"
                               From="0" Duration="0:0:1" />
            </Storyboard>
          </VisualState>

          <VisualState x:Name="BeforeUnloaded" />

          <VisualStateGroup.Transitions>
            <VisualTransition From="AfterLoaded"
                              To="BeforeUnloaded"
                              GeneratedDuration="0:0:1">
              <Storyboard>
                <DoubleAnimation Storyboard.TargetName="rootGrid"
                                 Storyboard.TargetProperty="Opacity"
                                 To="0" Duration="0:0:1" />
              </Storyboard>
            </VisualTransition>
          </VisualStateGroup.Transitions>
        </VisualStateGroup>
        <!-- End of additions to standard template -->
        ...
    </Grid>
  </ControlTemplate>

Figure 3 The FluidableItemsControl Class

  using System.Windows;
  using System.Windows.Controls;

  namespace FluidItemsControl
  {
    public class FluidableItemsControl : ItemsControl
    {
      public static readonly DependencyProperty ItemContainerStyleProperty =
        DependencyProperty.Register("ItemContainerStyle",
          typeof(Style),
          typeof(FluidableItemsControl),
          new PropertyMetadata(null));

      public Style ItemContainerStyle
      {
        set { SetValue(ItemContainerStyleProperty, value); }
        get { return (Style)GetValue(ItemContainerStyleProperty); }
      }

      protected override DependencyObject GetContainerForItemOverride()
      {
        ListBoxItem container = new ListBoxItem();

        if (ItemContainerStyle != null)
          container.Style = ItemContainerStyle;

        return container;
      }

      protected override bool IsItemItsOwnContainerOverride(object item)
      {
        return item is ListBoxItem;
      }
    }
  }

The crucial method is GetContainerForItemOverride. This method returns the object used to wrap each item. ItemsControl returns ContentPresenter, but ListBox returns ListBoxItem, and that’s what FluidableItemsControl returns as well. This ListBoxItem must have a style applied, and for that reason FluidableItemsControl also defines the same ItemContainerStyle property as ListBox.

The other method that should be implemented is IsItemItsOwnContainerOverride. If the item in the ItemsControl is already the same type as its container (in this case, a ListBoxItem), then there’s no reason to put it in another container. Now you can set a ListBoxItem style definition to the ItemContainerStyle property of FluidableItemsControl. The template within the style definition can be drastically simplified. It doesn’t need logic for mouse-over, selection or input focus, so all those visual states can be eliminated, as well as the three Rectangle objects.

The FluidItemsControl program shows the result. It’s pretty much the same as FluidListBox but with all the ListBox selection logic absent. The default panel for ItemsControl is a StackPanel, so that’s another simplification. To compensate for these simplifications, I’ve enhanced the animations for loading and unloading items. Now there’s an animation on the PlaneProjection transform that makes it appear as if the items are swiveling into and out of view.

Limitations and Suggestions
Even with the facility to define animations on items in an ItemsControl or ListBox, there still exists a crucial limitation: If the control incorporates a ScrollViewer, you can’t define transforms that take the item out of the box. The ScrollViewer imposes a severe clipping region that simply can’t be transgressed (as far as I’ve been able to determine). This means that techniques such as those I demonstrated in last month’s column are still valid and important in Silverlight 4.

But the use of the VSM to implement this fluid UI feature in Silverlight 4 is a good indication that the VSM is likely to play an increasingly important role in the future to link code and XAML. It’s time that we application developers started considering implementing our own visual states for custom behavior.

CHARLES PETZOLD is a longtime contributing editor to MSDN Magazine. He’s currently writing “Programming Windows Phone 7 Series,” which will be published as a free downloadable e-book in the fall of 2010. A preview edition is currently available through his Web site, charlespetzold.com.
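The container-generation contract at the heart of FluidableItemsControl (Figure 3) can be mimicked outside of Silverlight. Here's a minimal JavaScript sketch of the same idea (the class and function names are invented for illustration; this is not the Silverlight API): the control asks whether an item is already its own container, and wraps it only when it isn't.

```javascript
// Stand-in for Silverlight's ListBoxItem container type.
class ListBoxItem {
  constructor(content) { this.content = content; }
}

// Mirrors IsItemItsOwnContainerOverride: a ListBoxItem is its own container.
function isItemItsOwnContainer(item) {
  return item instanceof ListBoxItem;
}

// Mirrors GetContainerForItemOverride: wrap anything that isn't already
// a container in a fresh ListBoxItem.
function prepareContainer(item) {
  return isItemItsOwnContainer(item) ? item : new ListBoxItem(item);
}

const wrapped = prepareContainer("Apple");
console.log(wrapped instanceof ListBoxItem); // true
console.log(prepareContainer(wrapped) === wrapped); // true
```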
DON’T GET ME STARTED DAVID S. PLATT

Rejectionists Rejected

The response to my last column, calling on Microsoft to publish


UI standards for Windows Presentation Foundation (WPF) and
Silverlight, was quite gratifying, both in praise and scorn. Many
readers loved it; others detested it and said so quite loudly. Here
are the most strident objections, with my refutations.
Some readers hated the idea of any sort of standard. “Plattski, you
Luddite, shut up,” they wrote, “we don’t need no stinkin’ standards.
That’s so 20th century. We’ll do things that are cool and users will
love them because we love cool and users are just like us.” No they’re
not, and no they won’t. As I’ve said before in this space, users don’t
care about your software in and of itself. Never have, never will;
not even your mother. They don’t want cool, they want finished.
Almost all the readers who said this are under age 35. I picture
them rolling their eyes at me, as my daughter, now 10, practices
daily for her approaching teen years. They've grown up with UI
commonality as they've grown up with the measles vaccine: Never
experiencing—and rarely even thinking about—the absence of either.
But I've experienced the world both ways, and let me tell you:
Learning the UI peccadilloes of different applications at best consumes
time and effort that could be used more productively, and at worst
drives a user barking mad when the Save command of one program is
the Delete command of another. And even among otherwise healthy
patients in developed countries, measles kills one or two out of
1,000 patients and permanently damages more. We're far better off
today having both UI commonality and the measles vaccine, and giving
up either one of them is a bad idea.

A second cohort wrote: "We don't want Microsoft to prescribe
standards. We want standards to evolve naturally out of the use of
WPF in our applications." My response: WPF has been out for four
years now. Pioneer companies have spent eons of programmer time and
mountains of money on WPF, some of which made users happier and
some of which made them less happy.

The Family.Show sample genealogy application from Vertigo offers
spectacular examples of both, including excellent subconscious
right-brain communication, down to and including the speedy
infliction of physical pain. (See my article, "Using WPF for Good
and Not Evil" at tinyurl.com/27anuy7, for details.) We darn well
better have learned something from examples like this.

[Figure: From the Family.Show application. This is what can happen
without standards.]

Microsoft is the only entity that can gather the community's
experiences, combine them with its own extensive data and
promulgate the result industry-wide.

A third cohort wrote: "Standards cramp innovation and are a huge
barrier to progress, the classic example being the QWERTY keyboard."
Poppycock. Standards raise the bar for what's a useful innovation
and what isn't. If an alternate keyboard layout were that much more
efficient, we'd use it. If you can make users happier by violating
a standard, more power to you. An excellent example is Microsoft
OneNote, which automatically saves documents without needing user
action. If users like it, it'll become the new standard. Following
most of the standards allows the rest of your application to work
while you present your new innovation to users for their approval
or disapproval. Just know what you're doing and why you're doing it.

Social manners, such as shaking hands or bowing, are behavioral
conventions that help people live and work together harmoniously.
As technology advances, we invent new behavioral conventions to
cover the innovations; for example, turning off cell phones in a
theater. Similarly, UI standards are conventions that help people
and their computer programs live and work together harmoniously.
As UI technology advances, we need new conventions as to how and
when to employ its new features to make users more happy—not less.
And we need them right now, as WPF and Silverlight development
transitions from pioneer to mainstream.

David S. Platt teaches Programming .NET at Harvard University
Extension School and at companies all over the world. He is the
author of 11 programming books, including "Why Software Sucks"
(Addison-Wesley Professional, 2006) and "Introducing Microsoft .NET"
(Microsoft Press, 2002). Microsoft named him a Software Legend in
2002. He wonders whether he should tape down two of his daughter's
fingers so she learns how to count in octal. You can contact him
at rollthunder.com.


