Wednesday 4 June 2014

Building Multi-Platform SharePoint Applications Presentation for SPS Calgary

I recently had the opportunity to present at SharePoint Saturday Calgary on May 31st.  It was the first SPS in Calgary and it was an honor to be able to present.
Here is a link to the slide deck.
If you are looking to build a single SharePoint solution that can support traditional web parts, the new App Model, O365 and any mobile platform, then this presentation is for you.

Monday 28 October 2013

SharePoint Output Cache

What is this SharePoint Output Cache and why should I use it?

In ASP.NET there is an output cache that manages how page content is served.  It allows IIS to cache static elements, such as images and pages, so that subsequent requests do not need to go looking for these items, similar to how your browser cache keeps you from downloading the same images over and over again.  The big advantage with SharePoint is that when output caching is enabled it caches fully rendered versions of pages.

When SharePoint loads a page it is a really big process.  It needs to get the Master Page from the file system or the database.  It needs to get the Page Layout, again from the file system or the database.  It needs to get all the CSS, all the images and all the JavaScript.  From here it can start rendering the page, but there is more.  For every security-trimmed or audience-trimmed control it needs to make a call back to the database to determine whether or not to render it.  BTW: you read that correctly, it doesn’t make one giant call to figure out what to do with every security-trimmed object; it checks them one at a time.  This can take a bit of time when you think about all the buttons in the ribbon that may or may not appear, on top of the web parts the page requires.

As you can see, that is a lot of calls just to render one page.  With output caching enabled, the fully rendered page is cached and subsequent requests do not have to go through the whole process above.  Each one simply makes one call and gets one page back.  As you can imagine, this makes a huge difference in throughput.

To demonstrate just how much of an improvement this makes, I load tested a customized SharePoint 2013 site.  This site isn't a very heavy site, but it does have a lot of JavaScript for responsive design on top of all the other SharePoint scripts that are required.  I ran the same set of tests, with the same number of users, against the same site, and you can see the results in the chart below:
Counter                          Before   After
Average Page Load Time (s)       23.4     1.4
% Processor Time (App Server)    24.8     16.8
% Processor Time (WFE Server)    81.1     41.5

As you can see it made quite an improvement in Page Load Time and took a significant amount of stress off both servers.

Now you are probably wondering why this wouldn't be enabled by default, since it provides such significant performance gains.  Well, there is a bit of a catch.  Caching a version of the page means more memory is used on the server to store it.  On a publishing site this is a bigger consideration, as it will cache two versions of each page (published and draft).  And, as you've guessed, this can lead to some inconsistencies as well.  The cached page is removed when the page is updated, but there is a bit of a delay and each WFE has its own timer for when it updates the page.  In a farm with multiple load-balanced WFE servers it is possible that a first request goes to a WFE that has updated the page in its cache and a second request goes to a WFE that has not.  The update interval is 60 seconds by default, so this is a small window, but still possible.

To address some of these issues, you can create caching rules.  You can target rules at a Site Collection, a Site or individual Page Layouts.  You can also create caching rules based on a user’s access rights, so that readers see cached versions while contributors do not.  These rules can also be extended programmatically through the VaryByCustom handler, as sketched below.
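Here is a minimal sketch of what a custom VaryByCustom handler can look like.  The interface and registration call are from the publishing API as I understand it, and the locale rule is purely a hypothetical example; verify the details against the MSDN link at the end of this post before relying on them.

using System.Web;
using Microsoft.SharePoint.ApplicationRuntime;
using Microsoft.SharePoint.Publishing;

// Minimal sketch of a custom VaryByCustom handler.  Requests that produce
// the same string share the same cached copy of the page.
public class LocaleVaryByHandler : IVaryByCustomHandler
{
    public string GetVaryByCustomString(HttpApplication app, HttpContext context, string custom)
    {
        // Hypothetical rule: keep one cached copy per browser language so
        // French and English visitors never see each other's pages.
        if (custom == "Locale")
        {
            string[] languages = context.Request.UserLanguages;
            return (languages != null && languages.Length > 0) ? languages[0] : "default";
        }
        return string.Empty;
    }
}

The handler gets registered once at startup (for example from a custom Global.asax that inherits SPHttpApplication) via SPHttpApplication.RegisterGetVaryByCustomStringHandler(new LocaleVaryByHandler()), and the cache profile's Vary by Custom Parameter field is set to "Locale".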

Overall, in my opinion, output caching should always be leveraged for low-write, high-read sites.  Also, since you can set rules per page layout, you could set the caching time higher for a low-write page, like the home page of an intranet, and lower on high-write pages within the site.  With careful planning this feature can really help scale out the farm to handle more users while saving server resources.

For more information on output caching and how to enable it, see this helpful MS link: http://msdn.microsoft.com/en-us/library/aa661294(v=office.14).aspx

Monday 26 August 2013

Tweet from SharePoint

Tweeting from SharePoint sounds easy enough, but it turned out to be a little more challenging than I had originally anticipated.  Twitter recently updated their API and now requires authentication on every request, using OAuth.  It’s actually fairly simple once you know what it’s expecting, so I thought I’d share.

If you are using SharePoint 2013 there is a good library on CodePlex.  It’s called Tweetinvi and it’s a nice C# library that takes care of everything for you.  You can download it here and then proceed to the security sections below.

If you are still using SharePoint 2010 this library will not work for you, as it was compiled against .NET 4.0 and uses a lot of features that are only available in .NET 4.0 and higher.  Unfortunately I was in this boat, so I had to build my own .NET 3.5 library to do this.  I created two projects: Twitter and TwitterTest.  The first contains all the code required to post an update to Twitter.  The second is a console application to test it with.  This project is not as robust as the Tweetinvi project; I only required the ability to post, so that is all I built.

OK, so how did I do this and what was required?

The basics:

The Twitter API is basically a set of REST web services.  To make an update you must send a POST request to this address: https://api.twitter.com/1.1/statuses/update.json?status=[tweet]&trim_user=False.  Replace [tweet] with your URL-encoded message and you are almost there.  Not bad so far.

As I mentioned above, the Twitter API now requires authentication, which is provided in the HTTP Authorization header.  Twitter is looking for four keys; the values are unique per Twitter application and are provided by Twitter (see below for more details):
  • Consumer key
  • Consumer secret
  • Access token
  • Access token secret


So all you need to do is send a tweet to the API, using the URL above, and ensure the request carries a proper OAuth authentication header built from the four keys above.

To do all this I used the Tweetinvi library as a map.  I ripped out the required methods from the OAuthToken and OAuthWebRequest classes, pulled out the Utilities, Enums and Interfaces they needed, and created my own Tweet class that wired them all together as required.  Fortunately, for the most part the methods I needed did not have any dependencies on .NET 4.0, and the little bit that did was easy enough to rewrite in .NET 3.5.

At a high level, the OAuthToken class is responsible for holding the token values and leveraging the OAuthWebRequest methods to send a proper request.  OAuthWebRequest generates the web request with the proper authentication header.
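To make that concrete, here is a condensed, self-contained sketch of the whole flow: build an OAuth 1.0a signature from the four keys and POST a status update.  This is my own illustration rather than code lifted from Tweetinvi; the steps follow the OAuth 1.0a spec, but test it against your own keys before trusting it.  Note that it sends status in the POST body rather than the query string; both work as long as the parameters are signed.

using System;
using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Security.Cryptography;
using System.Text;

public static class TweetSketch
{
    public static string PostStatus(string status, string consumerKey,
        string consumerSecret, string accessToken, string accessTokenSecret)
    {
        const string url = "https://api.twitter.com/1.1/statuses/update.json";

        // Everything that gets signed: the oauth_* values plus the request
        // parameters themselves, sorted by key.
        SortedDictionary<string, string> p = new SortedDictionary<string, string>
        {
            { "oauth_consumer_key", consumerKey },
            { "oauth_nonce", Guid.NewGuid().ToString("N") },
            { "oauth_signature_method", "HMAC-SHA1" },
            { "oauth_timestamp", ((long)(DateTime.UtcNow - new DateTime(1970, 1, 1)).TotalSeconds).ToString() },
            { "oauth_token", accessToken },
            { "oauth_version", "1.0" },
            { "status", status },
            { "trim_user", "false" }
        };

        // Signature base string: METHOD&encoded-url&encoded-sorted-parameters.
        StringBuilder paramString = new StringBuilder();
        foreach (KeyValuePair<string, string> kv in p)
            paramString.Append(Encode(kv.Key)).Append('=').Append(Encode(kv.Value)).Append('&');
        paramString.Length--;  // drop the trailing '&'

        string baseString = "POST&" + Encode(url) + "&" + Encode(paramString.ToString());
        string signingKey = Encode(consumerSecret) + "&" + Encode(accessTokenSecret);

        using (HMACSHA1 hmac = new HMACSHA1(Encoding.ASCII.GetBytes(signingKey)))
            p.Add("oauth_signature", Convert.ToBase64String(hmac.ComputeHash(Encoding.ASCII.GetBytes(baseString))));

        // Only the oauth_* parameters go into the Authorization header.
        StringBuilder header = new StringBuilder("OAuth ");
        foreach (KeyValuePair<string, string> kv in p)
            if (kv.Key.StartsWith("oauth_"))
                header.AppendFormat("{0}=\"{1}\", ", Encode(kv.Key), Encode(kv.Value));
        header.Length -= 2;  // drop the trailing ", "

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "POST";
        request.Headers["Authorization"] = header.ToString();
        request.ContentType = "application/x-www-form-urlencoded";

        // The status travels in the POST body, percent-encoded.
        byte[] body = Encoding.UTF8.GetBytes("status=" + Encode(status) + "&trim_user=false");
        request.ContentLength = body.Length;
        using (Stream s = request.GetRequestStream())
            s.Write(body, 0, body.Length);

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
            return reader.ReadToEnd();  // JSON describing the new tweet
    }

    // RFC 3986 percent-encoding.  Uri.EscapeDataString covers most of it but
    // leaves a few reserved characters alone, so fix those up by hand.
    private static string Encode(string value)
    {
        return new StringBuilder(Uri.EscapeDataString(value))
            .Replace("!", "%21").Replace("*", "%2A").Replace("'", "%27")
            .Replace("(", "%28").Replace(")", "%29").ToString();
    }
}

From the console test project it is then just: TweetSketch.PostStatus("Hello from SharePoint", consumerKey, consumerSecret, accessToken, accessTokenSecret);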

Security, Getting the Key values from Twitter:

To do this you need to go to https://dev.twitter.com and log in with your Twitter account (BTW: at imason you can use imasontest, Alice…).  Go to My Applications: in the top right-hand corner there is your profile picture (or the beautiful egg if you haven’t updated it); click the drop-down arrow and select My Applications.  Create a new application and fill out the fields; it doesn't seem to matter what the website or callback URLs are (disclaimer: it didn't for my project and I don’t actually know what they are used for…so it may for you).  Once created, on the Settings tab there is a section called Application Type: make sure it’s set to Read and Write (assuming you want to POST) and click Update this Twitter application's settings.  Now on the OAuth Tool tab you will see the four keys and their values.  If you use these and try to POST to this account, it should work.

Security, the SharePoint Twist

So at this point the console application will work…but when you try to call this from SharePoint it fails…of course it does, there is always a “helpful” SharePoint twist.  The missing piece of the puzzle is configuring SharePoint to trust the Twitter API URL.  To do this you need to get the root CA certificate that Twitter uses.  You can grab this directly from VeriSign or get it from Twitter; you are looking for the Class 3 Public Primary Certification Authority – G2.cer file.
Once you have this file you can configure SharePoint to trust it.  Open Central Admin, go to the Security section, click Manage Trust, then click New.  Fill in a name (I used “Twitter”).  Under Root Certificate for the trust relationship, click Browse and locate the Class 3 Public Primary Certification Authority – G2.cer file.

With that the communication between SharePoint and Twitter should now work.  

Friday 19 April 2013

SharePoint Client Object Model


I have been doing a lot of work with the SharePoint Client Object Model (COM) and wanted to share some of my experiences.  The COM is only going to become more important as we start building more mobile applications and think about building SharePoint apps.

First off, I did not start with the SharePoint COM; I thought I would be able to build my site using the SharePoint REST web services.  One of my requirements, though, was to support anonymous access, and that just doesn’t seem to be possible with the REST web services in 2010.  Although, if anybody knows how to do this, I would love to know.  So I ended up with the COM, as it supports both authenticated and anonymous users (assuming of course that your Site Collection has been properly set up to allow anonymous users).  Overall the COM and the REST web services do pretty much the same thing, although the COM does have the neat feature of being able to batch multiple actions into a single request.  All in all, I may have been forced into the correct method after all.

The basic usage of the COM is very similar to the server Object Model.  For example, here is the code to get the list items from a list:
using (ClientContext sharePoint = new ClientContext(_strSPURL))
{
    // Open the web and grab the Pages list.
    Web web = sharePoint.Web;
    ListCollection spLists = web.Lists;
    List spList = spLists.GetByTitle("Pages");

    // CAML query: only the fields we need, newest first.
    CamlQuery camlQuery = new CamlQuery();
    camlQuery.ViewXml = @"<View>
                            <ViewFields>
                                <FieldRef Name='ID'/>
                                <FieldRef Name='AudienceTaxHTField0' />
                                <FieldRef Name='ServiceTaxHTField0' />
                                <FieldRef Name='Title' />
                                <FieldRef Name='ContentTypeId' />
                            </ViewFields>
                            <Query>
                                <OrderBy>
                                    <FieldRef Name='Modified' Ascending='False'/>
                                </OrderBy>
                            </Query>
                        </View>";

    Microsoft.SharePoint.Client.ListItemCollection spListItems = spList.GetItems(camlQuery);

    // Tell SharePoint exactly which fields to return for each item.
    sharePoint.Load(spListItems, items => items.Include(
                                            item => item["ID"],
                                            item => item["AudienceTaxHTField0"],
                                            item => item["ServiceTaxHTField0"],
                                            item => item["Title"],
                                            item => item["ContentTypeId"]));

    // Nothing has gone over the wire until now: one round trip for everything.
    sharePoint.ExecuteQuery();

    return spListItems;
}

As you can see, it is pretty much the same as what you do now.  Creating the ClientContext is like opening an SPSite object.  After that we open the web, get the list, run a CAML query and return the items.  The other thing you may notice is that I explicitly tell SharePoint which fields to return.  This ensures SharePoint only returns the data I’m interested in, the smallest package possible; as a side note, you should also be doing this when using the server Object Model…I mean, have you seen all the crap SharePoint returns if you don’t do this?

Some gotchas
You can only use the COM against a Site Collection.  You have no access to anything higher.

The biggest gotchas I ran into were with the Taxonomy fields.  In the 2010 COM there are no Taxonomy Field objects (they are in the 2013 COM).  This means that when trying to get data from Taxonomy fields you will have to reference the hidden Taxonomy field that is associated with the field you are interested in.  You’ll notice in the above example I’m referencing AudienceTaxHTField0 and not Audience, which is the field you would see when using the site.  The other piece is that the value coming back from this field will be in its raw form, meaning that you will need to massage it a bit for it to be useful, unless of course you love GUIDs.
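To give you an idea of the massaging involved, here is a minimal sketch of pulling the labels out of that raw value.  In my content the raw string looked like label|GUID pairs separated by semicolons, but that is an observation from my data, not a guarantee; check your own fields before relying on it.

using System.Collections.Generic;

// Minimal sketch: extract the human-readable labels from a hidden taxonomy
// field value that looks like "Finance|f9a6fd8d-...;HR|0c1b3a22-...".
private static List<string> GetTaxonomyLabels(object rawFieldValue)
{
    List<string> labels = new List<string>();
    if (rawFieldValue == null)
        return labels;

    foreach (string pair in rawFieldValue.ToString().Split(';'))
    {
        // Keep the label, throw away the GUID.
        string label = pair.Split('|')[0].Trim();
        if (label.Length > 0)
            labels.Add(label);
    }
    return labels;
}

So in the example above you would call GetTaxonomyLabels(item["AudienceTaxHTField0"]) to get something presentable.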

The other thing to keep in mind is performance.  Since your application will be making web service calls to SharePoint when using the COM, each request has a minimum round-trip time.  When coding, try to limit the number of round trips as much as possible, and where you can, use asynchronous methods so that the page doesn’t have to wait for all the data before it starts rendering.  Remember, it will be quicker to get a bunch of data and massage it in your app/site rather than make several little calls…to a point, as asking for too much data will also have a negative impact on the round-trip time…you’ll need to find a balance.  The COM helps here, as shown below, by letting you queue several operations before a single ExecuteQuery.
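For example, this sketch fetches items from two lists in one round trip (the list names are hypothetical):

using (ClientContext ctx = new ClientContext(_strSPURL))
{
    List pages = ctx.Web.Lists.GetByTitle("Pages");
    List news = ctx.Web.Lists.GetByTitle("Announcements");  // hypothetical list

    ListItemCollection pageItems = pages.GetItems(CamlQuery.CreateAllItemsQuery());
    ListItemCollection newsItems = news.GetItems(CamlQuery.CreateAllItemsQuery());

    // Queue both loads...
    ctx.Load(pageItems);
    ctx.Load(newsItems);

    // ...and send one HTTP request instead of two.
    ctx.ExecuteQuery();
}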


SharePoint Project Planning


Congratulate yourself for actually accepting that planning is required with SharePoint.  I know Microsoft likes to tell us you can just turn it on and use it, but we all know that’s just not the case.  Too many places are stuck with the pilot that never ends.  Starting with a pilot or POC is a great approach, but for it to work you need to circle back and actually plan a proper deployment that includes DR, availability and performance.  The pilot that never dies had none of these considerations and is like a ticking time bomb: who will be the first user to pick up the phone and complain that the site is unusable?  Will it be the CEO?  More importantly, will you be able to fix it?

To begin with, make sure you have the correct people for the job.  If you are going to train up internal resources to do the job, ensure the training happens early, ideally before any solution planning.  This is because the team planning the SharePoint solution will need to lead the end users to the solution.  It's hard for end users to ask for what they want when they don't know what is available.  Ask any SharePoint professional: there are many tools, tips and quirks to using SharePoint that only someone with experience would think of.  If no one knows how SharePoint works, it becomes the blind leading the blind.  Which is why, even if you are planning to train up your internal resources, you should ensure you have an experienced SharePoint professional on the project.  As we all know, there are some things that only experience can teach us.

Even once you have armed your team with this knowledge, the end users are going to ask for things that are known limitations of SharePoint.  While it is always a great strategy to use OOTB functionality, don't be too rigid on this.  Rather than just saying no, or agreeing to a complex custom solution, be prepared with workarounds or alternatives that may speak more to the spirit of the request.  After all, if the site isn't usable to the end user, who is going to use it?  But if it's overly customized, who is going to support it?

Now that you have enough information to design a solution that will meet the end users' needs, don't forget to consider the following in your design: high availability, disaster recovery, automated deployment to the multiple environments (UAT, Staging, Production), security, internal vs. external access, mobile (becoming bigger by the day), performance and, of course, monitoring.

Be agile in development, especially with people that don't have much SharePoint exposure or experience.  Again, when people don't know SharePoint, they can't know what to ask for.  By giving the users prototypes or sneak previews of the solution you can identify gaps early and have a much better chance of delivering a solution the end users will actually use.

Test, test and test; be sure testers also have SharePoint training.  Otherwise how will they know what to test and how to test OOTB features?  During this testing be sure to test in an environment that simulates production.  If you're going to have a highly available environment with multiple Web Front End servers, or you're using SSL, be sure to test these conditions.  I've seen many issues arise from these that are not reproducible in a single-WFE, non-SSL setup.  In addition, ensure you test your non-functional requirements like performance, high availability, disaster recovery and monitoring.  After all, if you don't test them, you don't have them.

Now that you have it in production, the job isn't over.  We've simply moved to the next step in the SharePoint lifecycle: build, monitor, review and improve.  After all, the end users' jobs are always evolving; your site will need to keep up.

Thursday 21 March 2013

Machine Translation Service


Overview

The new machine translation service in SharePoint 2013 is an attempt to finally make the variations feature useful.
The feature is intended to translate published content (documents, list items, entire sites) into other languages.  It is an extension of variations that sends the content, either synchronously or asynchronously, to a translation service in the cloud.  By default it is configured to use a Microsoft-hosted translation service, but it can be configured to use other third-party services.  The key here is that it uses a cloud-hosted service to take the load off your internal infrastructure.
Besides the cloud-based translation, this service application offers a few other improvements to the variations feature:

  • Variations now supports up to 209 variation labels for on-prem deployments and 50 variation labels in the cloud
  • No longer required to unpublish/publish documents to get changes to sync to related resources; lists and libraries now sync independently
  • Localized sites now use XML Localization Interchange File Format (XLIFF).  This is the standard and makes it easier to use third-party translation services to translate your app

Architecture

The architecture of this service is very similar to that of the Word Automation service; they both have similar components, for example timer jobs and document queues.  If you are familiar with the Word Automation service object model you should have very few problems using the machine translation object model.  Here is a great TechNet overview.
Asynchronous translation requests are handled through the document queue and a timer job.  By default the timer job is set to run every 15 minutes, but this is configurable.
Like all other service applications, it can be configured through either PowerShell or the SharePoint Central Admin…but seriously, who uses Central Admin?

Prerequisites

The following pieces are required to use the Machine Translation Service:

  • SharePoint 2013 Standard or Enterprise
  • App Management Service is started
  • Server-to-server app authentication is configured
  • User Profile Service Application Proxy is in the default group
  • User Profile Service is provisioned and configured
  • Internet connection

API

The machine translation service provides the following APIs:

  • Server Object Model
  • Client Object Model
  • REST Web Services
Through all three methods you are able to translate a single file, or all items in a list, library or folder, either synchronously or asynchronously.  You are also able to translate a file stream (which must be synchronous), for on-the-fly translation of uploaded content.
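As a taste of the server object model, here is a minimal sketch of queuing an asynchronous translation of a single file into French.  The class names come from the Microsoft.Office.TranslationServices namespace as I understand it, and the URLs are hypothetical; depending on the SDK version the constructors may also want an SPServiceContext, so verify the exact signatures against the documentation.

using System.Globalization;
using Microsoft.Office.TranslationServices;

// Minimal sketch: queue an async translation job for one file.  The job is
// picked up by the machine translation timer job (every 15 minutes by default).
TranslationJob job = new TranslationJob(CultureInfo.GetCultureInfo("fr-FR"));
job.AddFile("http://intranet/pub/Documents/Overview.docx",       // input (hypothetical URL)
            "http://intranet/pub/Documents/Overview-fr.docx");   // output (hypothetical URL)
job.Name = "Overview to French";
job.Start();

// Synchronous alternative: blocks until the translated file comes back.
// SyncTranslator translator = new SyncTranslator(CultureInfo.GetCultureInfo("fr-FR"));
// translator.Translate("http://intranet/pub/Documents/Overview.docx",
//                      "http://intranet/pub/Documents/Overview-fr.docx");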

Caution

This new service application is a very powerful new feature for SharePoint, but like anything else it comes with its own set of risks.  The two main ones that jump out at me are security and reliability, and both are fairly obvious.  On the security front, you are now sending your content into the cloud.  This puts your content at risk of being seen by people both inside and outside your organization who may not have permissions to it.  Generally this feature will be used for public sites which require multiple language support, which mitigates the risk somewhat, as the content was always intended for public consumption.  Of course this leads into the second risk: reliability.  Remember, this is a machine doing the translation, not a human.  It is basically taking each word in the source document and turning it into the equivalent word in the destination language, which may not end up conveying the message you are looking for.  Anyone who has traveled to non-English-speaking countries has seen their fair share of Engrish.  You probably want someone proofreading the translations to ensure your website doesn’t become popular for the wrong reasons.

Wednesday 6 March 2013

SharePoint 2013 Service Applications


What’s the same?

Architecture

Overall, this is the same architecture as SharePoint 2010:
  • Proxy Group (Groups of Service Applications consumed by a Web Application)
    • Service Application Proxy (Proxy between the Group and the Application)
      • Service Application (Search, User Profile, etc)
        • Service Application Instance (may be multiple instances of certain service applications)
          • Database (if needed)
This architecture is intended for multitenant (think cloud-hosted) farms.  It allows you to have a central set of service applications and share them, as required, with all the site collections within the farm and even with site collections in different SharePoint farms.  The inter-farm sharing is a very interesting concept for larger enterprise clients, along with companies hosting SharePoint.  When dealing with a large group of users these service applications can become very resource intensive.  One way to plan for this is to have a farm that is dedicated to the services.  This allows you to have smaller farm(s) that only require the resources needed to render the SharePoint sites, but still share a common set of search results or user profiles, for example.

Management

You have the same two choices: either through the Central Admin GUI or through PowerShell.  Although if you really consider yourself a SharePoint administrator, then your only option is PowerShell.  All jokes aside, avoid setting up service applications through the GUI.  The GUI makes lots of bad decisions when creating your service applications, for example using GUIDs in the database names and using the server name when creating the web applications that host the service applications.

What’s Changed

Federation

This is what allows you to share service applications across farms, both locally and remotely.  Although federation itself hasn't really changed in SharePoint 2013, the set of service applications that can be shared across farms has.  Here is the list of service applications that can be shared across farms:
  • BCS 
  • Managed Metadata 
  • Search 
  • Secure Store 
  • Machine Translation Services 
  • User Profile

In addition the remote farm no longer requires permissions to the parent database.

New Service Applications

Here is a list of the new service applications in SharePoint 2013:
  • Access Services: Create, deploy and manage collaborative web-based Access applications.  This can also be used when developing SharePoint Apps 
  • App Management Service: For the app Marketplace 
  • Machine Translation Services: Cloud-based translation service for documents, pages and sites.  Has been built to be extendable and has the ability to use third-party translators 
  • Work Management Services: Puts all your outstanding tasks in My Tasks.  Has two-way sync with Project Server and Exchange, and plugins for other systems (MS is very vague on what that means).  Even has the ability to remind you of tasks on your mobile device.  You’ll never be safe again

Improved Service Applications

These service applications have gotten even better in SharePoint 2013:
  • Managed Metadata: Improved managed metadata navigation 
  • Search Service: FAST and Web Analytics have been rolled into search 
  • Subscription Settings Service: Now handles app management 
  • User Profile Service: Added back a 2007-style sync (strictly read-only and faster), plus the ability to import additional properties from BCS

Removed Service Applications

These service applications did not make the cut in SharePoint 2013:

  • Web Analytics: Rolled into the Search Service 
  • Office Web Apps: Now its own product, but still available for externally facing (internet) SharePoint sites