Thursday, 10 May 2012

SharePoint 2010 Console App returns ErrorWebParts

On a recent upgrade project I needed to update the properties of a Content Query Web Part. Sounds easy enough: just whip up a quick console app and away we go.

Coding went very quickly, but when I ran the app I couldn't get any Content Query Web Parts; they were all returned as "ErrorWebParts".

I consulted my good friend Google and found this blog post [1]. The author was having the same issue as me and discovered that this web part references the SPContext object; unfortunately, a console app has no web context.

After reading this, I modified my code to set HttpContext.Current and, like magic, I was able to edit the properties of all the CQWPs.
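The same trick works from a PowerShell console, which suffers from the identical lack of a web context. Here is a rough sketch of the fix from the referenced post; the site URL is an assumption, and the "HttpHandlerSPWeb" item key is the one SharePoint checks internally:

```powershell
# Sketch only: fabricate an HttpContext so the CQWP can resolve a web context.
$site = New-Object Microsoft.SharePoint.SPSite("http://portal")   # assumed URL
$web = $site.OpenWeb()
if ([System.Web.HttpContext]::Current -eq $null) {
    $request  = New-Object System.Web.HttpRequest("", $web.Url, "")
    $writer   = New-Object System.IO.StringWriter
    $response = New-Object System.Web.HttpResponse($writer)
    [System.Web.HttpContext]::Current = New-Object System.Web.HttpContext($request, $response)
    # SharePoint looks for the current SPWeb under this well-known key
    [System.Web.HttpContext]::Current.Items["HttpHandlerSPWeb"] = $web
}
```

With the fake context in place, the web parts come back as ContentByQueryWebPart instances instead of ErrorWebParts.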

SharePoint Content Query Web Part not returning all results

On a recent SharePoint 2010 migration project we noticed that when we created new items (pages in the pages library), the Content Query Web Parts did not immediately display the results.

This seemed very odd at first, as I had never noticed a delay with these web parts before, so my first thought was a migration issue. But why would a migration cause the web part to work slower?

So with the help of my good friend Google, I was able to find a blog post [1] that offered the following: the CQWP ignores these items:
  1. Items that are checked out
  2. Items that are not published or not approved

But I was hitting Publish on the page, so my items would not have fallen into either of these categories... unless it was just a delay in SharePoint marking the item as published.

Following the advice in the same blog post, I wrote a console app that targeted all the CQWPs and changed the "UseCache" property to false.
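A minimal sketch of that console-app logic, written here as PowerShell (the site URL and the "Pages" library name are assumptions for illustration):

```powershell
# Walk each page in the Pages library and turn off caching on every CQWP found.
$web = Get-SPWeb "http://portal/site"    # assumed URL
foreach ($item in $web.Lists["Pages"].Items) {
    $scope = [System.Web.UI.WebControls.WebParts.PersonalizationScope]::Shared
    $manager = $web.GetLimitedWebPartManager($item.Url, $scope)
    foreach ($wp in @($manager.WebParts)) {
        if ($wp -is [Microsoft.SharePoint.Publishing.WebControls.ContentByQueryWebPart]) {
            $wp.UseCache = $false          # trade cached performance for freshness
            $manager.SaveChanges($wp)
        }
    }
    $manager.Dispose()
}
$web.Dispose()
```

Note that SPLimitedWebPartManager will only hand back real ContentByQueryWebPart objects if a web context exists, as described in the previous post.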

Just like magic the items I created are now showing up immediately in their respective CQWPs.

Keep in mind this may not be the best solution for a site with high load or CQWPs running large queries. You'll need to weigh the performance cost against the benefit of immediate results. In most cases users will accept a small delay as long as they understand the situation.


Monday, 7 May 2012

Real Time Search Results with SP Change Log

Search results are only as up to date as the index.

In most cases this small limitation is not an issue. No one really cares that their posted document or blog post will not show up in search results until the next crawl.

From a performance point of view this is a very small price to pay, compared to looping through the entire site with the SharePoint API to find the handful of documents that may not be in the index.
But what if the majority of your screens are search driven? What if the people using the site need to see documents the minute they are posted?

Earlier in the year we ran into that exact problem on one of our projects. Many users found it confusing when they would post a new item, but it would take at least 5 minutes before their item would show up on any screens. 

Very unintuitive. How do you know everything went well?

To solve this issue we looked into the SharePoint Change Log. The Change Log was created so the crawler could target exactly which items need to be added to the index. In SharePoint 2003, every incremental crawl needed almost a full crawl just to see what had changed. In MOSS and higher, every change is recorded in the Change Log, and the crawl uses this log to add/remove items from the index.

As you can imagine this is a very powerful tool. We can use the search results to get the vast majority of results, then use the Change Log to target exactly which items need to be added/removed from our screens.
Just like that, Real Time Search Results from SharePoint.
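As a sketch, pulling recent changes from the log looks roughly like this; the site URL is assumed, and persisting the change token between runs is left as an exercise:

```powershell
# Query the change log for list-item adds/updates/deletes since the last run.
$site = Get-SPSite "http://portal"       # assumed URL
$query = New-Object Microsoft.SharePoint.SPChangeQuery($false, $false)
$query.Item   = $true                    # only list-item objects
$query.Add    = $true                    # change types we care about
$query.Update = $true
$query.Delete = $true
$query.ChangeTokenStart = $lastToken     # token saved from the previous query (assumed)
$changes = $site.ContentDatabase.GetChanges($query)
foreach ($change in $changes) {
    # Merge these adds/removes into the search-driven screen's result set
}
```

The returned collection carries a token of its own, which you save and pass as ChangeTokenStart on the next call, so each query only sees what happened since the previous one.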

Migrating SharePoint MySites from 2007 to 2010

MySites, for me anyway, are the forgotten child of SharePoint. When migrating, I put a lot of thought into how to move the existing site collections and solutions: what do they currently have set up in the SSP, and how are we going to recreate it in the Service Application architecture of 2010? But I never put much thought into how we were going to move the MySites.

On the surface this doesn't sound like a big deal. MySites are site collections like any other, simply built off a different template. Ideally they are hosted on their own Web Application and have their own separate content database(s); so what's the big deal?

This view isn't that far off, but there are a couple of gotchas you need to consider before making the move:
  1. No Visual Upgrade option for MySites (or at least I couldn't find one). This means you will be forced to the 2010 Master Page.
  2. All Profile Properties are stored in the SSP database, not the MySite Database. Any custom properties, or any customized values will not be migrated over with just the MySites Content Database.
  3. Audiences are stored in the SSP. Web Parts referencing audiences will be referencing the GUID of the Audience, not the title. This makes it very difficult to just recreate your Audiences on the new environment.
The solution to the last two issues is to migrate over the SSP database as the profile database of the User Profile Service Application in the SharePoint 2010 environment.

To do this you can use this PowerShell cmdlet to create the new User Profile service:
$ProfileGUID = New-SPProfileServiceApplication -Name [User Profile Name] -ApplicationPool [App Pool Name] -ProfileDBName [SSP Database] [1]

The above command will create a User Profile Service Application and upgrade the SSP Database. You can review the upgrade by looking at the Upgrade Status page. Next you'll need to run this command:
New-SPProfileApplicationProxy -Name [User Profile Proxy] -ServiceApplication $ProfileGUID [2]
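Filled in with hypothetical names, the two commands together look like this (the restored SSP database must already be attached to SQL Server):

```powershell
# All names here are placeholders for illustration only.
$ProfileGUID = New-SPProfileServiceApplication -Name "User Profile Service" `
    -ApplicationPool "SharePoint Service App Pool" `
    -ProfileDBName "SharedServices1_DB"
New-SPProfileApplicationProxy -Name "User Profile Service Proxy" `
    -ServiceApplication $ProfileGUID -DefaultProxyGroup
```

Adding the proxy to the default proxy group means your web applications pick up the upgraded profile service without any extra association steps.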

Next up, the multi-value user profile properties do not come through right away. This, again, is due to the new Service Application structure. The multi-value properties were stored in the SSP in MOSS, but are now stored in the Managed Metadata Service in SharePoint 2010. Fortunately there is a handy PowerShell cmdlet that helps map the properties to the Managed Metadata Service:
Move-SPProfileManagedMetadataProperty -Identity [Profile Property] -ProfileApplicationProxy [GUID of User Profile Application Proxy] -AvailableForTagging -TermSetName [Term Set Name] [3]
I ran the above command against these properties:

Unfortunately, this command no longer works after you install the July CU or later [4]. If possible, run this command after SP1 but before the CU.
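As an illustration only, mapping one of the standard multi-value properties might look like the following; the property name, term set name, and proxy lookup are all assumptions, and the parameter syntax follows the template above:

```powershell
# Hypothetical example: map the multi-value SPS-Responsibility property
# to a "Responsibilities" term set in the Managed Metadata Service.
$proxy = Get-SPServiceApplicationProxy | Where-Object { $_.TypeName -like "*Profile*" }
Move-SPProfileManagedMetadataProperty -Identity "SPS-Responsibility" `
    -ProfileApplicationProxy $proxy -AvailableForTagging `
    -TermSetName "Responsibilities"
```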

Finally (I hope) there is the matter of picture dimensions. SharePoint 2010 now uses three different sized pictures for the different areas of SharePoint. [5] When a user uploads a picture, SharePoint automatically creates these three sizes. But what do we do about the pictures that already exist from our migration? Once again it's PowerShell to the rescue. You'll need to run this cmdlet:
Update-SPProfilePhotoStore -MySiteHostLocation [URL to MySite Host] [6]

Now you should be able to open up this newly created User Profile Service Application. You should see that all your audiences, custom user profile properties and user profiles are there.
To finish this off, you will need to run a couple of configuration steps:
  1. Start the User Profile Synchronization Service
  2. Re-Map any custom user profile properties to their AD property
  3. Configure Synchronization Connection and any filters
  4. Setup MySites
  5. Start Full Profile Synchronization
Now your upgraded MySites should be ready to use.


Finally, High Availability for SharePoint 2010…

Disclaimer: I am not a database expert, nor have I had the chance to personally use SQL Server 2012

High availability has always been a bit of a pain for SharePoint. It sounds easy enough. For the presentation layer just add more Web Front End Servers. For the Application layer just add more application servers and run the Service Applications on multiple servers. But what about the database?

In the past the main options were clustering, mirroring or log shipping. Each of these options was good at some things, but they all had their share of limitations. For instance, clustering was great on the server side, but all the servers in the cluster shared the same storage; mirroring provided separate storage, but you could only have one mirror; log shipping provided multiple replicas with separate storage, but is a very manual process. To top it off, neither log shipping nor mirroring allows any access to the mirror or replica databases. As you can see, none of these options makes for a great solution.

Recently Microsoft released SQL Server 2012. As you'd expect, this contains some great new features. Here is a link to an MSDN blog with the top 10 new features for SharePoint:

As you can see there are some neat things SharePoint can now take advantage of. The one I want to focus on is Always On.

Always On now promises a true high availability option for the data tier. It appears that Always On combines the best of clustering with the best of mirroring: it allows you to set up a single DNS entry into a cluster of multiple SQL Servers, while each SQL Server node keeps its own storage. It also has another nifty feature, Availability Groups, which let you logically group databases that fail over together. This allows you to set different levels of high availability for different SharePoint farms within the same Always On cluster. That is huge from a licensing point of view, as it gives you even more flexibility within shared SQL Servers while still providing the level of redundancy required by the business. Finally, as an added bonus, the failover nodes are left in a read-only mode, so when required we can report off the failover nodes rather than hitting the live node.

What I'm still uncertain about is how this all works with the Service Application databases within SharePoint 2010. Many of the SharePoint 2010 Service Application databases do not support either log shipping or mirroring <Shameless Plug>To find out which Service Application databases are supported by either Log Shipping or Mirroring, see the SharePoint Reference Architecture document</Shameless Plug>. My hunch is that, since they put a lot of time into making this feel like a clustering solution (one that doesn't require shared storage), all the Service Application databases will be supported within Always On. But as always, we will need someone to test it first before we know for sure.

In any event, if you are designing a High Availability solution consider using SQL Server 2012 and let us all know how it goes.