SHAREPOINT BUILDING BLOCKS – Unofficial blog by Benjamin Athawes

Solving site and document follow issues in SharePoint 2013 caused by security updates
Fri, 15 Jan 2016
Although I’ve performed my own testing to support the content of this blog post, all software updates should be regression tested in your specific environment before being deployed to production. No two farms are exactly alike!

What’s the issue?

Last year, a client of ours reported that they were unable to follow sites in SharePoint 2013 following the installation of an August 2015 security update for Word Automation Services (KB3054858). The error message displayed was simply “Sorry, we couldn’t follow the site”. As it turns out, this regression also breaks SharePoint’s document follow functionality. This blog post identifies the security updates (plural) that cause this problem, and explains the options that are available to either avoid or resolve it.

If your farm is suffering from this problem, here are the error messages that you will see when attempting to follow SharePoint 2013 sites and documents:



Has this been “officially” recognised by Microsoft as a regression?

No. However, the testing that I’ve carried out so far has convinced me that the public updates listed below are responsible for the problem, and that it is not environment-specific. Additionally, it looks as though the regression has shipped in several security updates released at various times since August 2015, when it first appeared in KB3054858.

Given that deploying these updates can cause a SharePoint Server 2013 farm to “return to a former or less developed state” (the Oxford Dictionaries definition), I’m going to describe this issue as a “regression”, albeit an unofficial one.

Does this affect me?

This post is aimed at people that look after SharePoint 2013 farms that – for whatever reason – are a little behind in terms of SharePoint updates. If you’ve already deployed the October 2015 Public Update (KB3085567), or are on the August 2015 Cumulative Update (KB3055009) or later, then you shouldn’t be affected. You may of course be affected by other regressions, particularly if you have opted to install any recent cumulative updates. The August 2015 CU, for example, contains two known regressions that Todd Klindt reminds us about on his ever-useful SharePoint blog. That particular CU is also particularly troublesome to install, sometimes requiring more than one attempt.

If you aren’t sure about the distinction between the different types of SharePoint update that I’ve mentioned above (cumulative versus public updates, for example), I’d recommend reading the explanation published by Stefan Goßner, a Senior Escalation Engineer at Microsoft. Since public updates usually include SharePoint security fixes, I use the terms “security update” and “public update” (PU) interchangeably for the remainder of this article.

Are we definitely talking about the same issue?

If you think that your farm might be suffering from the site and document follow issue that I’ve described, here is the ULS error that you should see when attempting to follow a site:

Original error: System.MissingMethodException: Method not found: ‘System.String Microsoft.Office.Server.UserProfiles.UserProfile.get_FollowPersonalSiteUrl()‘.

at Microsoft.Office.Server.UserProfiles.UserProfileServerStub.GetProperty(Object target, String propName, ProxyContext proxyContext)

at Microsoft.SharePoint.Client.ServerStub.GetPropertyWithMonitoredScope(Object target, String propertyName, ProxyContext proxyContext)

I’ve highlighted the get_FollowPersonalSiteUrl() method because we’ll be revisiting that shortly.

ULS get_FollowPersonalSiteUrl() error
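If you’d rather confirm this from PowerShell than trawl the logs in ULSViewer, a rough filter along these lines should surface the entries. This is only a sketch – adjust the time window to suit, and expect it to take a while on busy servers:

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Search the local server's ULS logs for the MissingMethodException raised when a follow fails
Get-SPLogEvent -StartTime (Get-Date).AddHours(-1) |
    Where-Object { $_.Message -like "*get_FollowPersonalSiteUrl*" } |
    Select-Object Timestamp, Area, Category, Message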

Microsoft’s investigation

Our client uses SharePoint’s native social functionality extensively, so we decided to escalate the site and document follow issue to Microsoft in an effort to speed up the resolution. Microsoft Support provided the following statements and recommendations:

  • The problem relates to a dependency between the Microsoft.Office.Server.UserProfiles.dll and Microsoft.Office.Server.UserProfiles.ServerStub.dll assemblies, which manifests itself when KB3054858 (released August 11, 2015) is installed without the August 2015 CU or later.
  • Although installing the August 2015 CU or later will resolve the issue, Microsoft recommended that we deploy the November 2015 CU (KB3101373). The precise reasons for that recommendation have not been disclosed to us yet.

Having considered the known regression that the November 2015 CU contains, our client decided to proceed with Microsoft’s suggested course of action. Installing the November 2015 CU *does* resolve the site and document follow issue. But that’s not the whole story.

My follow-up

Since deploying this fix, I’ve had some time to perform my own testing to help determine which specific updates contain the site and document follow regression. My intention was not to second-guess Microsoft’s recommendation, but I did want to clearly understand the root cause of the problem in order to ensure that I can provide folks with the right advice. In doing so, I’ve concluded that at least four separate security updates released between August and November 2015 can cause the regression, and that deploying the November 2015 CU is *not* the only way to fix it. Feel free to skip to the end of this post if you simply want to see the list of affected updates. If you’d like to see the “evidence” that backs my assertions, read on.

I started my testing by configuring a local single-server lab environment in an attempt to re-create the issue (I figured that even my mediocre PowerShell skills could manage that). I installed SharePoint Server 2013 with Service Pack 1, then installed all security updates for SharePoint up to and including the August 2015 PU (KB3054858). The site and document follow issues described by my client immediately reared their ugly heads. Keep in mind that my lab sits on my home machine, and is completely isolated from the client’s infrastructure.

I decided to fire up .NET Reflector and dig a little deeper. Having analysed dependencies within Microsoft.Office.Server.UserProfiles.ServerStub.dll (the file mentioned in the ULS entry), it appeared clear that the version of this assembly that ships with KB3054858 relies on a method that was absent in my lab farm’s version of Microsoft.Office.Server.UserProfiles.dll:

Missing get_FollowPersonalSiteUrl() method

In contrast, the get_FollowPersonalSiteUrl() method was alive and kicking after I upgraded my lab farm to the August 2015 CU, and I was once again able to follow sites and documents. All this is expected behaviour based on the Microsoft Support statements included earlier.
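If you don’t have .NET Reflector to hand, a rough equivalent of this check can be performed with PowerShell by loading the installed assembly and looking for the property via reflection. This is just a sketch based on my understanding of the issue – it assumes you run it on a SharePoint 2013 server and that the assembly carries the standard SharePoint public key token:

# Load the installed User Profiles assembly and look for the FollowPersonalSiteUrl property
$asm  = [System.Reflection.Assembly]::Load("Microsoft.Office.Server.UserProfiles, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c")
$type = $asm.GetType("Microsoft.Office.Server.UserProfiles.UserProfile")

if ($type.GetProperty("FollowPersonalSiteUrl")) {
    Write-Host "FollowPersonalSiteUrl is present - site and document follow should work" -ForegroundColor Green
} else {
    Write-Host "FollowPersonalSiteUrl is missing - this farm is exposed to the regression" -ForegroundColor Red
}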


I was now keen to understand which SharePoint 2013 updates included the two DLLs in question. Through liberal usage of Hyper-V checkpoints and Reflector, I found that:

  • Microsoft.Office.Server.UserProfiles.ServerStub.dll hasn’t changed since August 2015, and ALL SharePoint Server 2013 updates (cumulative and public) include it
  • The “missing” get_FollowPersonalSiteUrl() method was added to Microsoft.Office.Server.UserProfiles.dll in the August 2015 CU
  • All *cumulative* updates released since August 2015 contain Microsoft.Office.Server.UserProfiles.dll and therefore the “missing” method
  • However, the October 2015 PU is the ONLY *public* update released since August 2015 that contains Microsoft.Office.Server.UserProfiles.dll and therefore the “missing” method

To help clarify the version history of these assemblies, I’ve pulled together a list of all cumulative and public updates that have been released for SharePoint Server 2013 since August 2015. Note that I have excluded SharePoint Foundation 2013 updates, as those do not appear to include the two assemblies in question (most likely because the User Profile Service Application doesn’t ship with the Foundation SKU):

SharePoint Server 2013 updates released since August 2015, showing which include the Microsoft.Office.Server.UserProfiles.dll assembly

| KB                             | Type | Release Date       | Includes UserProfiles.dll? | UserProfiles.dll version | UserProfiles.ServerStub.dll version |
|--------------------------------|------|--------------------|----------------------------|--------------------------|-------------------------------------|
| KB3054858                      | PU   | August 11, 2015    | No                         | –                        | 15.0.4745.1000                      |
| KB3055009                      | CU   | August 11, 2015    | Yes                        | 15.0.4745.1000           | 15.0.4745.1000                      |
| KB3054813                      | PU   | September 8, 2015  | No                         | –                        | 15.0.4745.1000                      |
| KB2986213                      | CU   | September 17, 2015 | Yes                        | 15.0.4749.1000           | 15.0.4745.1000                      |
| KB3085567                      | PU   | October 13, 2015   | Yes                        | 15.0.4757.1000           | 15.0.4745.1000                      |
| KB3085492                      | CU   | October 13, 2015   | Yes                        | 15.0.4757.1000           | 15.0.4745.1000                      |
| KB3085477 (replaces KB3054858) | PU   | November 10, 2015  | No                         | –                        | 15.0.4745.1000                      |
| KB3101364                      | PU   | November 10, 2015  | No                         | –                        | 15.0.4745.1000                      |
| KB3101373                      | CU   | November 10, 2015  | Yes                        | 15.0.4771.1000           | 15.0.4745.1000                      |
| KB3114345                      | CU   | December 8, 2015   | Yes                        | 15.0.4779.1000           | 15.0.4745.1000                      |
| KB3114497                      | CU   | January 12, 2016   | Yes                        | 15.0.4787.1000           | 15.0.4745.1000                      |

Note that this list may not be exhaustive – the security updates were mostly identified by reviewing the list available within Windows Update. Please let me know if I’ve missed off a SharePoint Server 2013 update that shipped between August 2015 and January 2016.
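If you want to compare your own farm against the table above, one quick and unofficial way to read the file versions of the two assemblies is to pull them straight out of the .NET 4 GAC on a SharePoint server – treat this as a sketch rather than an official check:

# Report the file versions of the two assemblies involved in the regression
Get-ChildItem "$env:windir\Microsoft.NET\assembly\GAC_MSIL" -Recurse -Include `
    "Microsoft.Office.Server.UserProfiles.dll", "Microsoft.Office.Server.UserProfiles.ServerStub.dll" |
    Select-Object FullName, @{ Name = "FileVersion"; Expression = { $_.VersionInfo.FileVersion } }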

Having identified that the October 2015 PU is the only public update released since August 2015 that contains Microsoft.Office.Server.UserProfiles.dll, I was keen to understand whether that update alone would “fix” the site and document follow regression without having to install a cumulative update. Keep in mind that cumulative updates should only be installed if they resolve specific problems, whereas security updates should be tested and deployed as soon as possible.

I rolled back my lab environment to its original “regressed” state (SharePoint Server 2013 with Service Pack 1 + all security updates up to and including the August PU), and confirmed that I got the “Sorry, we couldn’t follow the site” error. Once again using Hyper-V checkpoints, I ran through a number of different scenarios to help confirm that the October 2015 PU irons out the site and document follow regression:

  1. I installed the October PU (KB3085567) alone, with no other additional updates. Site and document follow functionality was fixed as I had anticipated.
  2. I rolled back to the August PU (KB3054858), and installed ALL outstanding SharePoint 2013 security updates up to and including January 2016. Given that the October PU was included, site and document follow functionality was fixed.
  3. I once again rolled back to the August PU, and installed all outstanding SharePoint 2013 security updates up to and including January 2016 EXCEPT the October 2015 PU. This time, site and document follow functionality remained broken.

Although clearly not exhaustive, these tests give me a level of confidence that installing the October 2015 PU is one (perhaps the only) way of fixing – or avoiding – the regression described here short of deploying a cumulative update. With this information in-hand, my default approach to resolving this problem is to simply test and install all outstanding security updates for SharePoint as a first port of call.

In contrast – with limited time available to thoroughly investigate the root cause – we followed Microsoft’s recommendation to install the November 2015 CU for our client due to a pressing need to restore SharePoint’s follow functionality. I now plan to point Microsoft Support at this post in order to help understand whether the October PU would also have been a viable option, and will post an update if I receive any further clarification.

If you decide to go ahead with the October 2015 PU (KB3085567), it should be available for download via Windows Update if you’ve opted in to receive non-OS updates. Security updates for SharePoint 2013 sit within the “Office 2013” category and look like this if you happen to be installing them via Windows Update:


One should of course be testing and installing all security updates, but this specific PU resolves the follow regression described in this post. Remember to test all this in your environment first, and please stop by to let me know how you get on!

Q&A

Should I just install all available security updates rather than the specific update that you’ve mentioned?

Yes, I suggest you test and deploy all security updates. Make sure, however, that KB3085567 is included if you’ve run into the site and document follow issue.

Which updates can cause the site and document follow regression?

The security updates that do NOT include Microsoft.Office.Server.UserProfiles.dll (KB3054858, KB3054813, KB3085477 and KB3101364 in the table above) will cause the site and document follow issue described here *if* they are installed without the October 2015 PU, or the August 2015 CU or later.

So do I need to install any cumulative updates?

As far as I can tell, no CUs are required to fix the site and document follow issue.

Why would anyone run into this problem now, given that one can simply “install all security updates” to avoid it?

I don’t expect that many farms will suffer from this regression given that the October 2015 PU includes the goodies required to avoid it. However, considering that patching SharePoint can be very time consuming, I anticipate some folks might run into this if they are behind on patching and need to deploy a subset of the outstanding security updates for SharePoint (perhaps to minimise an outage window).

Should I run PSConfig after installing security updates?

Yes – the update binaries aren’t fully applied to the farm until PSConfig has been run.

The SharePoint Cloud Search Service Application – initial thoughts
Mon, 15 Jun 2015

In May, I was lucky enough to attend Microsoft’s Ignite 2015 conference in Chicago along with a handful of other Content and Code colleagues. A stand-out session for me unveiled the forthcoming SharePoint Cloud Search Service Application, which – among other enhancements – will finally deliver a consolidated on-premises and cloud Search Index that lives in SharePoint Online. The news that this thing will be available for both SharePoint Server 2013 and 2016 was particularly interesting.

You can read more over on the Content and Code blog.

Introduction to Basic and HA SharePoint Server Farms in Microsoft Azure IaaS
Sun, 20 Jul 2014
Too long, didn’t read (TLDR) summary

  • The Azure SharePoint Server Farm application template appears to be targeted at development and testing scenarios.
  • You get two topology options: a “basic” farm (3 VMs, no HA) and a “high-availability” farm (9 VMs). The HA option costs about twice as much per month.
  • It cost me about £10 to “spin-up”, then de-allocate an Azure SharePoint Server Farm, but your mileage may vary.
  • I’ve uploaded a SPSFarmReport of a vanilla “high-availability” Azure SharePoint Farm for you to peruse at your leisure.

On 9th July 2014, Microsoft published an article that – amongst other announcements – introduced the idea of templates within Azure Infrastructure as a Service (IaaS) for multi-machine/tier applications such as SharePoint:

“Create, deploy, monitor and manage rich virtual machines’ based applications, and manage virtual networks within a fully customizable Portal experience. In addition to creating simple virtual machines, we are adding the ability to automate the deployment of rich multi-machine application templates with a few clicks. With this, deploying a multi-tier, highly-available SharePoint farm from the portal will be a few clicks away!”

Sure enough, a quick trip over to the Azure Preview Portal confirmed that this functionality is available within the gallery (for me at least):


In this blog, I briefly note down my thoughts on how this offering has been positioned, then go on to discuss what you get, and some of the main assumptions that Microsoft have made when putting these templates together. Note that I have no “inside” information – everything here is inferred from the Azure Preview Portal, and inspection of the VMs that are provisioned when creating an Azure “SharePoint Server Farm”.

When might we deploy an Azure “SharePoint Server Farm”?

Looking at the screenshot above of the Azure Preview Portal, it isn’t obvious whether the Azure SharePoint Server Farm is intended for development, testing, production or all of the above. The article is clearer, as it differentiates between a “basic” farm (three VMs, no HA) and a “high-availability” farm (nine VMs with HA), and briefly notes their intended purpose (emphasis added):

  • “You can use this [basic] farm configuration for a simplified setup for SharePoint app development or your first-time evaluation of SharePoint 2013.”
  • “You can use this [high-availability] farm configuration to test higher client loads, high-availability of the external SharePoint site, and SQL Server AlwaysOn for a SharePoint farm. You can also use this configuration for SharePoint app development in a highly available environment.”

As you can see, it appears that an Azure SharePoint Server Farm is intended for development, test and evaluation purposes. There is no mention of production workloads, and I speak to some of the possible reasons for that below.

What do I get?

By clicking the “Create” button in the Azure Preview Portal, you will either create a “basic” or “high-availability” SharePoint Server 2013 farm. The topologies of those farms are shown below:

“Basic” Azure SharePoint Server Farm (3 VMs, no high-availability)


“High-availability” Azure SharePoint Server Farm (9 VMs, including a SQL Server 2014 AlwaysOn availability group)


Clearly there are a ton of configuration options within each VM that are not spoken to above. Here are some of the key design choices that I noted whilst perusing my Azure SharePoint Server farm:

  • A new forest and root domain are created along with your Azure SharePoint Server farm. If you already have existing AD DS infrastructure in Azure IaaS, there does not appear to be a way of installing SharePoint within that infrastructure.
  • A SQL Server 2014 AlwaysOn availability group is created automatically. This requires SQL Server 2014 Enterprise Edition, which isn’t cheap (as reflected in the VM costs shown below).
  • The following choices were made regarding SharePoint Server 2013:
    • SharePoint Server 2013 Service Pack 1 is installed (build 15.0.4569.1000).
    • A single content-serving Web Application is created, with a single root path-based Site Collection. This does not align with Microsoft’s current guidance for new SharePoint 2013 environments.
      • Interestingly, port 80 is open within the Windows Firewall, exposing this Web Application to the Internet. We would typically expose SharePoint to the Internet using a reverse proxy server, and ensure that all Web Applications are SSL-secured for security reasons.
    • No Service Applications are provisioned aside from those that are created automatically when creating a new farm (the Security Token Service and Application Topology Service).
    • Only the Setup and Farm Accounts are provisioned. In production, it is unlikely that those accounts would be sufficient, assuming that Microsoft’s account recommendations are followed.
    • All SharePoint VMs host an instance of the Distributed Cache Service. Some Microsoft staff recommend dedicated Distributed Cache servers for performance and stability reasons.
  • The default pricing tier/specifications of the SQL Server and SharePoint VMs do not meet Microsoft’s minimum hardware requirements for SharePoint Server 2013. For example, Web and Application servers require 12 GB RAM and 4 CPU cores per server, and the default pricing tier selected for those VMs (A2 Standard) provides 3.5 GB RAM and 2 cores. I expect the default specifications for Azure SharePoint Server Farm VMs to be insufficient for SharePoint 2013 Service Application resource requirements, even if those VMs are intended for development or testing purposes.

These design points underline the idea that an Azure SharePoint Server Farm is a starting point for development and testing. We still need to apply additional effort to get these guys into a state that is ready for anything but the most basic SharePoint development. Today, that effort would most likely take the form of applying a PowerShell script to automate “remaining” Service Application, Web Application and systems configuration in order to produce a farm that is aligned with the production environment(s) that it supports.
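To give a flavour of what that “remaining” configuration script might contain, here is a minimal, hypothetical snippet that provisions a Managed Metadata Service Application – the application pool, account and database names are placeholders and would need to align with your own design:

# Hypothetical example: provision a Managed Metadata Service Application and proxy
# "CONTOSO\SPServices" must already be registered as a SharePoint managed account
$appPool = New-SPServiceApplicationPool -Name "SharePoint Services App Pool" -Account "CONTOSO\SPServices"

$mms = New-SPMetadataServiceApplication -Name "Managed Metadata Service" `
    -ApplicationPool $appPool -DatabaseName "SP2013_MMS"

New-SPMetadataServiceApplicationProxy -Name "Managed Metadata Service Proxy" `
    -ServiceApplication $mms -DefaultProxyGroup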

It’s worth noting that if an Azure SharePoint Server Farm were intended for production usage, the act of creating it via the Azure Preview Portal does not remove the need to produce a proper design. Once we arrive at a design, it is likely that we would choose the “high-availability” option for production as a starting point, then add or remove VMs to meet our requirements. Identity integration would be a key design consideration given that “Azure SharePoint Server Farms” come with a dedicated Active Directory Forest. Taking all of this into account, I question how much time we would save by using the Azure SharePoint Server Farm template in production, and can see why the feature is marketed as a development/test capability.

How much is it?

It’s always challenging to talk about pricing in a blog post, as Microsoft licensing agreements differ from customer to customer. What I will do is put together a quick “back of the napkin” price list so that you can see the relative cost of the “basic” and “high-availability” Azure SharePoint Server Farm options. Note that these are list prices, and I only list the default pricing tier costs mentioned on the Azure Preview Portal. Additional licensing costs (such as those required for SharePoint) are likely to apply, and an MSDN subscription may make this more affordable. I’m no licensing expert, so please check with your licensing reseller before committing to anything.

List prices for “basic” Azure SharePoint Server Farm (default pricing tiers on pay-as-you-go) on July 20th, 2014

| VM role           | Quantity | Default pricing tier | Monthly cost |
|-------------------|----------|----------------------|--------------|
| Domain Controller | 1        | A1 Standard          | £42.61       |
| SQL Server        | 1        | A5 Standard          | £2130.67     |
| SharePoint        | 1        | A2 Standard          | £85.23       |
| Total             |          |                      | £2258.51     |

List prices for “high-availability” Azure SharePoint Server Farm (default pricing tiers on pay-as-you-go) on July 20th, 2014

| VM role                             | Quantity | Default pricing tier | Monthly cost |
|-------------------------------------|----------|----------------------|--------------|
| Domain Controller                   | 2        | A1 Standard          | £85.22       |
| SQL Server                          | 2        | A5 Standard          | £4261.34     |
| SQL Server File Share Witness (FSW) | 1        | Basic A0             | £9.47        |
| SharePoint                          | 4        | A2 Standard          | £340.92      |
| Total                               |          |                      | £4696.95     |

A few points to note about the pricing shown in the Azure Preview Portal (and listed above):

  • The default pricing tier/specification of individual VMs in each “tier” is the same in both the “basic” and “high-availability” options.
  • As explained earlier in this post, the default pricing tier/specifications of the SQL Server and SharePoint VMs do not meet Microsoft’s minimum hardware requirements for SharePoint Server 2013. Of course, you can bump up those specifications at an additional cost.
  • SQL Server VM costs appear to include SQL Server licensing fees, whereas SharePoint VMs do not. This is reflected in the “choose your pricing tier” dialogue shown below.

“Choose your pricing tier” dialogue for SQL Server VMs


“Choose your pricing tier” dialogue for SharePoint VMs


Personally, I find it a little odd that you can’t change the SQL Server license that is applied when creating a SharePoint farm via the Azure Preview Portal. Although SQL Server Enterprise licensing is required for the “high-availability” option (per usage of a SQL Server 2014 AlwaysOn availability group), I can’t think why an Enterprise license would be required for the “basic” option, and imagine this choice significantly increases cost.

By the way, if you find yourself wondering how to reduce costs whilst a development or test environment is not in use, I have found the Stop-AzureVM PowerShell cmdlet to be very useful. Be aware that shutting down all VMs in a cloud service releases the associated public virtual IP address, which may be a problem if you have public DNS infrastructure that points to that IP. In my case, this hasn’t been a problem as the Azure SharePoint Server Farm that I created is temporary in nature. Also note that stopping (de-allocating) VMs means that you won’t incur compute charges, but you will still pay for the underlying storage.
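For reference, this is roughly how I use it with the (classic) Azure PowerShell module – the cloud service name is a placeholder:

# De-allocate every VM in the cloud service so that compute charges stop accruing.
# -Force suppresses the prompt that warns about releasing the deployment (and its public VIP);
# use -StayProvisioned instead if you want to keep the VIP (and keep paying for compute).
Get-AzureVM -ServiceName "MySharePointFarm" | Stop-AzureVM -Force

# Spin everything back up when needed
Get-AzureVM -ServiceName "MySharePointFarm" | Start-AzureVM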

Stopped (de-allocated) VMs, after running Stop-AzureVM


For what it’s worth, I incurred a cost of just over £10 for “spinning up” a “high-availability” Azure SharePoint Server Farm, then de-allocating it right away using Stop-AzureVM. You can see in the chart below that the “OTHERS” category makes up a small percentage of the overall cost, which presumably includes storage. Remember that costs vary by region and by subscription, so your mileage may vary.

Cost of “spinning up” a “high-availability Azure SharePoint Server Farm” with default options selected


Wrap-up

That’s all for now. If you’d like to know a little more about the configuration of a “high-availability” Azure SharePoint Server Farm, feel free to download an SPSFarmReport that I ran post-creation.

ULSViewer.exe download (MSDN archive version)
Mon, 26 May 2014

17/09/2014 update: Microsoft have released a new version of ULS Viewer, which you might want to try instead of this one.

For reasons that are unknown to me, the MSDN Code Gallery has recently been taken down. That gallery contained ULSViewer.exe, a much-loved tool that no SharePoint guy or gal should be without. Although there are many versions of the tool out there in the wild, I believe this is the version originally created for Microsoft’s internal support teams by Dan Winter. I’m not sure if this is the “best” version as such, but it certainly works for me.

ULSViewer appears to be subject to the MSDN Code Gallery Binary License, meaning that we are free to install, use, copy and distribute the software. To my surprise, I couldn’t find the tool elsewhere online, so have uploaded it to this blog. Enjoy!

Download ULSViewer 2.0.3530.27850

ULSViewer

Ben

 

SPC14 word cloud summary
Wed, 26 Mar 2014

A couple of weeks back, I was lucky enough to be sent along to the Microsoft SharePoint Conference 2014 with a handful of my colleagues at Content and Code. For me, this conference gave me a lot of confidence that we are implementing the right solutions for our clients that use SharePoint in its private (on-premises/managed hosting) and public (Office 365) cloud flavours. This was my first SPC – so I can’t really compare it to previous events – but it was a blast!

By filtering my Twitter feed on the #spc14 tag, it’s easy to find a lot of decent technical session write-ups from the SharePoint community. With that in mind, I thought I’d take a slightly different tack and consider the message that Microsoft tried to get across at the conference. Given that I’ve only had time to review a handful of the 180+ slide decks, I’m hardly in a position to provide a broad summary just yet, but I thought some form of automated PowerPoint review might provide an interesting high-level overview of the topics that were discussed.

I started by creating a word cloud using the text contained in all SPC14 PowerPoint presentations (over 3,000 slides!). That process produced a bunch of noise words which I removed, resulting in this broad overview:

SPC14 word cloud with noise words removed


As you might expect, the words “SharePoint” and “Microsoft” dominate this cloud, so my next step was to remove those terms. Now, the emphasis on Office, Search and Yammer is immediately noticeable, followed closely by Windows, App(s), Cloud, Web, Server and Content:

SPC14 word cloud with “SharePoint” and “Microsoft” removed


Since this is mainly an infrastructure-focussed blog, I also ran through the above process using all PowerPoint decks from the IT PRO track. This time, the words Office and Yammer have slightly less emphasis, but Windows, Azure and SQL are vying for your attention. We also see other topics such as Identity, Directory and Hybrid start to creep in:

SPC14 IT PRO track word cloud with “SharePoint” and “Microsoft” removed


None of this is really surprising for current SharePoint practitioners, as many of us have spent the last twelve months or so getting to grips with technologies such as Windows/Microsoft Azure and Yammer. It does remind me how rapidly things are changing for us SharePoint people: two years ago, I hadn’t heard of Yammer. Today, we use Yammer internally, and I favour it over email for many tasks (particularly those where the aim is to “crowd source” information rather than action something specific). Similarly, Azure wasn’t really in the frame for SharePoint hosting back then: today, it is a supported hosting option for on-premises SharePoint 2013 and it acts as the primary hosting environment for some organisations (particularly for dev/test platforms). In the future, I plan to carry out a similar comparison against these word clouds to see how Microsoft’s messaging – and the plethora of products that we need to understand to do our jobs – changes over time.

Just in case anyone fancies doing their own analysis on the text files that I used to produce these word clouds, I’ve attached them to this post.

Ben

Using host-named site collections in SharePoint 2013 with MySites
Wed, 11 Dec 2013

Although these guys have been around since WSS 3.0, host-named site collections haven’t received a great deal of attention up until the last year or so. Having previously worked at a small SharePoint hosting company, I’ve always found this slightly surprising; we preferred to use host-named sites over their path-based counterparts due to the huge scalability they offered us when creating “vanity” URLs for customers. In WSS 3.0 (the “Foundation” version of SharePoint Server 2007, for the 2010 and 2013 folks out there), we could create up to 150,000 site collections per Web application, vs. a documented limit of 99 Web Applications per farm. In reality, SharePoint 2007 farms would often start to creak at the seams way before that 99 Web Application limit was reached, and this was reflected in subsequent product versions (Microsoft recommend no more than 20 Web Applications per farm in SharePoint Server 2013). This underlines the point that site collections are the unit of scale in SharePoint, and host-named site collections mean that vanity URL requirements alone may not provide sufficient justification for multiple Web Applications.

Fast forward to today and host-named sites have hit the big time, and they are a key component of Office 365. Microsoft aren’t shy about admitting this – in fact, host-named sites are now the preferred deployment method in SharePoint 2013. However, as with most capabilities in the SharePoint world, the decision to use host-named sites isn’t the no-brainer that TechNet might want you to believe. It’s a good thing, then, that there are a bunch of great posts out there already for you to digest if you want broader coverage than this post offers (here we are mainly addressing MySites):

  • A post by Steve Peschka
  • A post by Kirk Evans
  • A post by Wictor Wilén

So why have I bothered writing yet ANOTHER article regarding host-named site collections, I hear you quite rightly ask? In truth, all the information contained herein is out there already, but I had a couple of very specific, “nuts and bolts” type questions that I have been asked by some of our clients and colleagues who have tried, with varying degrees of success, to implement host-named site collections. I figured it would be worth stepping through the answers to these questions by scenario, the first being usage of MySites within Microsoft’s recommended logical architecture for SharePoint 2013. In this architecture, one of the most significant departures from traditional SharePoint deployments is that Microsoft recommend a single Web Application for the entire farm where possible, excluding SharePoint Central Administration. This post isn’t intended to answer the wider question as to whether a single Web App is a good idea – it simply covers a couple of implementation details that may help you out if you plan to pursue this option.

Can MySites be host-named?

I’ll cut to the chase – the short answer is yes, but possibly not in the way you might expect (we will dig into that below). This is despite the fact that, according to Microsoft, the OOB self-service site creation experience doesn’t support host-named site collections. My best guess at an explanation is that one of three MySite instantiation timer jobs actually carries out the work of creating a user’s MySite, as opposed to the synchronous process that runs when you create a “regular” self-service site. As far as I can tell, however, Self Service Site Creation does need to be enabled for MySite instantiation to work.

MySite Instantiation Request Queue

MySite instantiation jobs in SharePoint Server 2013
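If you’d like to see these jobs on your own farm, a rough way to list them from PowerShell is shown below (the display name filter is my own guess and may need tweaking between builds):

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# List the MySite instantiation timer jobs along with their schedules and last run times
Get-SPTimerJob | Where-Object { $_.DisplayName -like "*Instantiation*" } |
    Select-Object DisplayName, Schedule, LastRunTime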

Why would I want a host-named MySite anyway?

To answer this question, we need to take a step back for a moment and review Microsoft’s definition of a host-named site. As far as I can tell, there are (unofficially, using my own terminology) two types:

  • A “traditional” host-named site collection that looks and smells like a Web Application Public URL, or IIS host header binding (but is NOT the same thing). Example: https://bathawes-my.sharepoint.com. I’m going to call this a “root” host-named site in this post.
  • A “host-named site collection created at a managed path”. Unfortunately, these look very much like the path-based site collections that we know and love, but there is a difference: they should be created under a managed path created specifically for host-named sites. Example: https://bathawes-my.sharepoint.com/personal/ben_bathawes_com. I’m going to call these “child” host-named sites in this post.

While I’m at it, I’m also going to start abbreviating “host-named site collection” to HNSC every so often for brevity. About time, right? :)

Back to the question at hand: it probably makes no sense to use a “root” HNSC for individual user MySites per DNS requirements, but a “child” HNSC does the job nicely, and appears to align with Microsoft’s recommended architecture for host-named sites. Microsoft provide a simple PowerShell script to help you work this out for yourself – below is an example from one of my dev VMs. Using the “unofficial” terminology I have defined above, note that:

  1. https://sharepointhosting.bathawes.com is the only path-based site collection in the farm, created at the root of the Web Application. This site collection is required for search crawls to function correctly.
  2. https://my.bathawes.com is a “root” HNSC.
  3. /personal is a Managed Path created for host-named sites (we know this because the -HostHeader parameter was specified when using New-SPManagedPath).
  4. https://my.bathawes.com/personal/administrator and https://my.bathawes.com/personal/sp2013_install are “child” host-named sites.

MySites are child HNSC

Illustration of “root” and “child” host-named sites.

The section below provides the script I used to configure this environment so you can test this yourself.

Create a Web Application for host-named MySites with PowerShell

I used various sources to put together the script below, but the two articles I should call out are Spencer Harbar’s and Steve Peschka’s posts on the subject. I’ve made a few tweaks here and there to take into account Microsoft’s strong recommendation to use SSL for SharePoint 2013 Web Applications, and to automate creation of the MySite host. There are a few assumptions that you should be aware of before running the script:

  1. I assume that a User Profile Service Application has been created, and that the MySite host has been set to the correct URL.
  2. I assume that you want to use SSL per Microsoft guidance, and have a valid certificate.
  3. I’m not fond of using a server’s machine name as the URL of a Web Application, primarily because there will be more than one server in almost all SharePoint deployments. I’m not 100% sure that changing the URL to something more friendly (by passing the -Url parameter to New-SPWebApplication), is supported in this architecture for host-named sites, but I haven’t had any problems so far in my development environment, so assume for now that it is.
<# Sets up a SharePoint 2013 Web Application for hosting host-named site collections per http://technet.microsoft.com/en-us/library/cc424952.aspx
#>

<# App Pool details
#>
$appPoolName = "SharePointHosting"
$appPoolUserName = "bathawes\SPHosting"
$ownerAlias = "bathawes\sp2013_install"
$ownerEmail = "[email protected]"

<# Web App details
        Note that the Web App URL is HTTPS per SSL guidelines from Microsoft
#>
$hostingMainURL = "https://sharepointhosting.bathawes.com"
$webAppName = "SharePoint Hosting"
$contentDBName = "SharePoint_Content_Hosting"

<# Host-named site collections
        Ensure that the MySite Host URL is configured correctly within the User Profile Service, under the "Setup My Sites" link in SPCA
#>
$mysitehost = "https://my.bathawes.com"

$managedAccount = Get-SPManagedAccount $appPoolUserName

<# Create a new Web App using Windows Claims (Windows (NTLM))
      The -Url parameter specifies the Default Public URL. Otherwise, the machine name must be used when creating the root (path based) site collection
      The -SecureSocketsLayer is only required if using SSL
      Also changed -Port to 443
      When the Web App is created, ensure that an appropriate certificate is bound in IIS
#>
$authenticationProvider = New-SPAuthenticationProvider

write-host "Creating Web Application for host-named site collections at $hostingMainURL..."
$webApp = New-SPWebApplication -ApplicationPool $appPoolName -ApplicationPoolAccount $managedAccount -Name $webAppName -Port 443 -AuthenticationProvider $authenticationProvider -DatabaseName $contentDBName -Url $hostingMainURL -SecureSocketsLayer

<# Sometimes, the New-SPSite cmdlet reports that a path-based site already exists if it is run immediately after creating the Web App, so sleep for a minute
#>
write-host "Web App created" -foreground "green"
write-host "Sleeping for a minute before creating the root path-based site collection..."
Start-Sleep -s 60

<# Create path-based Site Collection at the Web App root. This won't be accessed by users but is required for support.
#>
New-SPSite -Url $hostingMainURL -owneralias $ownerAlias -ownerEmail $ownerEmail

# Enable self-service site creation for MySites
$webapp = Get-SPWebApplication $hostingMainURL
$webapp.SelfServiceSiteCreationEnabled = $true
$webApp.Update()
write-host "Self-service site creation enabled successfully..." -foreground "green"

<# Removing the existing /sites path-based managed path per http://blogs.technet.com/b/speschka/archive/2013/06/26/logical-architecture-guidance-for-sharepoint-2013-part-1.aspx
#>
$sitesManagedPath = Get-SPManagedPath sites -WebApplication $hostingMainURL
if ($sitesManagedPath -ne $null) {Remove-SPManagedPath sites -WebApplication $hostingMainURL -confirm:$false}
write-host "Removed /Sites path-based managed path..." -foreground "green"

<# Create MySite Managed Path (a managed path for use with HNSC, so ONE per farm)
#>
$personal = Get-SPManagedPath personal -hostheader 
if ($personal -eq $null) {New-SPManagedPath personal -HostHeader}
write-host "Created /Personal managed path for MySites..." -foreground "green"

<# Create the MySite Host
#>
New-SPSite -Url $mysitehost -owneralias $ownerAlias -ownerEmail $ownerEmail -HostHeaderWebApplication $hostingMainURL -Template SPSMSITEHOST#0
write-host "Created MySite host at $mysitehost..." -foreground "green"

$webApp = Get-SPWebapplication $hostingMainURL

<# Confirm that the correct sites have been created
        From http://technet.microsoft.com/en-us/library/cc424952.aspx#section3a
#>
write-host "Confirming the site collections that we created within $hostingMainURL :"
$webApp = Get-SPWebapplication $hostingMainURL

foreach ($spSite in $webApp.Sites)
{
    if ($spSite.HostHeaderIsSiteName)
    { Write-Host $spSite.Url 'is host-named' -foreground "green" }
    else
    { Write-Host $spSite.Url 'is path based' -foreground "red" }
}

write-host "Done!" -foreground "green"

Below are a couple of screenshots of my farm after running the above script. Usage of a non-standard port for SPCA (2013 in this case) is irrelevant to this discussion – it’s just how this dev VM is configured:

HNSC Web App in SPCA

HNSC Web App in IIS

As SPCA only appears to be aware of path-based Managed Paths, the /Personal Managed Path for host-named sites doesn’t appear. Also note that the script above removes the default /Sites path-based Managed Path, as it is not required for HNSC:

 Path-based sites in SPCA for HNSC Web App

07/01/2014 update: if no path-based Managed Paths are defined for a Web Application, you will see the error below when attempting to create a site collection from within SPCA. This is due to the fact that SPCA can only create path-based sites.

NoInclusionsDefinedForPathSiteCreation

Below, I have enumerated the host-named and path-based Managed Paths in the same farm and Web Application using PowerShell. This time, /Sites and /Personal do appear, as they are Managed Paths created using the -HostHeader parameter:

Host-named vs. Path-based Managed Paths

This sounds awesome! Why would I use any other approach?

Although consolidating to a single Web Application has potential performance, administration and future support benefits, there are a few significant trade-offs. For a start, we will probably need to turn to custom code or PowerShell to ensure that site collections get created in the “correct” SharePoint Content Database. We also need to be comfortable with the idea that all Web Application scoped options apply to all site collections in the entire farm. This includes security policies, web.config changes (such as those required to configure BLOB Caching), and Service Application connections amongst others. We also lose a couple of options when moving to a host-named site model, the most well-known being self-service site creation (except, apparently, for MySites!). As I say, this post isn’t really about prescribing a model, but I wanted to flag those considerations so they are out in the open.
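To illustrate the first of those trade-offs, targeting a specific content database when creating a site collection is straightforward from PowerShell – a hypothetical example follows, with the URL, accounts and database name being placeholders:

# Create a host-named site collection in a specific content database
New-SPSite -Url "https://projects.bathawes.com" `
    -HostHeaderWebApplication "https://sharepointhosting.bathawes.com" `
    -ContentDatabase "SharePoint_Content_Projects" `
    -OwnerAlias "bathawes\sp2013_install" `
    -Template "STS#0"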

Ben

How to renew your ADFS 2.0 token signing certificate in SharePoint
Wed, 31 Jul 2013

Over the past year or so, we have found that Active Directory Federation Services (ADFS) has become a more common requirement for both cloud and on-premises SharePoint deployments. Although we find that it is often implemented to facilitate single sign on across otherwise disconnected infrastructure, we have also deployed it to support claims augmentation for SharePoint environments that utilise SAML claims. As such, we have built up a fair chunk of experience deploying and operating ADFS in both production and our own internal development environments.

ADFS uses various certificates to secure communications and facilitate authentication, and this post is focussed on the token-signing certificate.

Note that this post is NOT intended to provide general guidance on deploying ADFS, or on configuring SharePoint to use it for authentication. The aim is to explain why certificate renewal is necessary, and describe how to do it with ADFS 2.0 and SharePoint Server 2010. Having said that, I imagine the steps would be identical in SharePoint Server 2013, and perhaps ADFS v2.1 too.

 Is this relevant to me?

If you look after a SharePoint environment that relies on ADFS 2.0 for authentication, then this post is relevant to you. By default, the ADFS token signing certificate is configured to expire 1 year after ADFS is first installed. When that happens, the new certificate needs to be re-imported into SharePoint’s trusted identity provider, and be trusted by SharePoint. If these steps are not followed, all Web application zones that rely on ADFS for authentication will be unavailable. If your ADFS token signing certificate has already expired, then SharePoint is most likely unavailable and you will probably find the following error in the event log on your SharePoint server(s):

An operation failed because the following certificate has validation errors:

Subject Name: CN=ADFS Signing – adfs.domain.com
Issuer Name: CN=ADFS Signing – adfs.domain.com
Thumbprint: F8CDCC978D4A816713754663A56C102B72580CFE

Errors:

The root of the certificate chain is not a trusted root authority.

If you aren’t sure whether a SharePoint Web application is using ADFS, here is an example of the “Authentication Providers” screen within SharePoint Central Administration, for a SP2010 Web App relying on ADFS. The fact that the “Trusted Identity Provider” box is checked is a pretty strong indication that ADFS is in use. Note that your provider is unlikely to be called “ADFSv2”, as the name is configured at the point of creation:

If you have access to the ADFS server, you can view certificate expiry dates under ADFS 2.0 > Service > Certificates:
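If you prefer the command line, the following sketch shows the relevant expiry dates – run the first part on a SharePoint server and the last part on the ADFS server (the property names are to the best of my knowledge, so verify before relying on them):

# On a SharePoint server: expiry dates of the certificates SharePoint currently trusts
Get-SPTrustedRootAuthority |
    Select-Object Name, @{ Name = "Expires"; Expression = { $_.Certificate.NotAfter } }

Get-SPTrustedIdentityTokenIssuer |
    Select-Object Name, @{ Name = "SigningCertExpires"; Expression = { $_.SigningCertificate.NotAfter } }

# On the ADFS server: expiry date of the token-signing certificate(s)
Get-ADFSCertificate -CertificateType Token-Signing |
    Select-Object IsPrimary, Thumbprint, @{ Name = "Expires"; Expression = { $_.Certificate.NotAfter } }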

What is an ADFS token signing certificate, and why would it expire?

TechNet concisely describes the purpose of the ADFS token signing certificate:

“Federation servers require token-signing certificates to prevent attackers from altering or counterfeiting security tokens in an attempt to gain unauthorized access to federated resources…The Web server in the resource partner uses the public key of the token-signing certificate to verify that the security token is signed by the resource federation server.”

My interpretation of this is that by importing the ADFS token signing certificate, SharePoint (the Web server) is able to verify that security tokens are signed by ADFS (the resource federation server).

As for “why would it expire”, common security guidelines for certificate management state that the shorter the lifetime of a certificate, the more frequently the identity of the signer is verified. To me, a year of validity seems to be a fairly sensible duration for a production deployment, but this duration may not be appropriate for less critical systems such as development and test environments.

Note that in a default configuration, expired certificates are automatically replaced by ADFS, due to usage of a feature known as auto-certificate rollover. The problem here is that relying parties (such as SharePoint) need to be made aware of the new token-signing certificate.

How do I renew the token-signing certificate in SharePoint?

There are two steps required to renew the certificate (at least as far as SharePoint is concerned – this assumes that the new ADFS token signing cert has already been generated):

  1. Import certificate into SharePoint’s trusted certificate store (SharePoint Central Admin or PowerShell)
  2. Import certificate into SharePoint’s trusted identity provider (PowerShell)

The PowerShell required to perform the above steps forms part of the overall process followed to configure SharePoint to trust ADFS, so if you have configured SharePoint for ADFS before this is nothing new. This script needs to be run on a SharePoint server:

# Find the ADFS token signing cert
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\ADFSTokenSigning.cer")

# Import the cert into SharePoint's trusted root authority store
New-SPTrustedRootAuthority -Name "ADFS Token Signing" -Certificate $cert

# Import the cert into the SPTrustedIdentityTokenIssuer
Get-SPTrustedIdentityTokenIssuer | Set-SPTrustedIdentityTokenIssuer -ImportTrustCertificate $cert

Note that it doesn’t appear to be necessary to remove the previously used certificate (SPTrustedRootAuthority), and Set-SPTrustedIdentityTokenIssuer overwrites the previous token signing certificate. Additionally, an IISReset was not necessary when testing in my environment.

I don’t want to do this every year. How do I stop the certificate from expiring?

At Content and Code, we have a lot of development VMs that rely on ADFS. In this scenario, it’s quite possible that ADFS token signing certificates should never expire, as the security risk is minimal or non-existent. ADFS has the capability to generate its own certificates (in which case you should follow the steps below), or you could import a certificate generated externally (for example, you might decide to issue a new certificate using a certificate authority within the domain). If you decide to generate a certificate outside of ADFS, you may want to review the certificate requirements for ADFS first.

Assuming that you are using ADFS to generate the new token signing certificate, you can use the Set-ADFSProperties cmdlet to modify the CertificateDuration property, then create a new token signing certificate. In the example below, new certificates won’t expire for 36500 days (100 years):

Set-ADFSProperties -CertificateDuration 36500

Note that this needs to be run on the ADFS server. If you aren’t familiar with using the ADFS PowerShell cmdlets, I suggest running “Windows PowerShell Modules” as administrator to get started:

If you are the cautious type, you can run Get-ADFSProperties to check the current certificate duration before changing it. You will probably find that your ADFS server is set to the default value of 365 days, but in this case I have already changed the value to 36500 using the script above:
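For completeness, the one-liner I use for that check looks like this (run on the ADFS server):

# Check the duration (in days) used for new self-signed certificates, and whether rollover is automatic
Get-ADFSProperties | Select-Object CertificateDuration, AutoCertificateRollover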

We can now create a new Token Signing certificate that will be valid for the new duration:

Update-ADFSCertificate -CertificateType Token-Signing -Urgent

By including the –Urgent parameter, we are triggering immediate certificate rollover, meaning that any reliant parties will need to be updated with the new certificate before authentication via ADFS can occur. In other words, the cmdlet above will break authentication for all SharePoint Web Application zones using ADFS until we have imported the new certificate. Remember, this needs to be run on the ADFS server.

Having completed this step, you should now find that the token signing certificate within ADFS is valid for 100 years:

Optionally, you may wish to disable auto-certificate rollover completely in your development environments. This PowerShell script will do just that:

Set-ADFSProperties -AutoCertificateRollover $false

Obviously having done this, you will have to renew your ADFS certificate manually.

What about the other ADFS certificates?

You might have noticed that there are three types of ADFS certificate presented in the ADFS 2.0 UI:

I haven’t had a chance to investigate how the Service communications and Token-decrypting certificates are used in the context of SharePoint. For what it’s worth, I did replace both of those certificates in my environment and did not notice any obvious availability problems within SharePoint. I do however advise treading very carefully, especially given the heavy reliance that SharePoint places on the token-signing certificate.

Perhaps the other ADFS certificates will be the topic of another blog post. Thanks for reading!

Ben

Resolving partial encryption problems with BitLocker
Sun, 17 Mar 2013
As illustrated in this blog post, encryption can result in irrecoverable loss of data. It is strongly recommended that you take a backup before using BitLocker to encrypt existing data. The approach outlined here worked for me but you may not be as lucky. Use at your own risk!
As an IT consultant, I have a firm requirement to carry a bunch of software and virtual machines around with me on a regular basis. Although a lot of this information is stored on my laptop, I also have numerous high-capacity USB drives that I use to store backups of key information. Whilst this data isn’t especially sensitive, I’d probably lose a little sleep if I were to misplace one of those drives. I recently started to view BitLocker in my newly-purchased copy of Windows 8 Pro as a solution to this problem.

Rather than spend lots of time researching the technology, I uncharacteristically jumped head first into the world of Bitlocker and clicked “Turn BitLocker on” for the drive mentioned below:

As you will read shortly, this didn’t go too well, but before we get into the problem/solution, I’d like to describe the context first. I would suggest you read this even if you are already in a bad situation and are looking for a quick solution (e.g. your data is partially encrypted with BitLocker and you can’t access it), as there are some useful articles mentioned here that might help you understand the issue.

The example hard drive (victim) used in this blog is…

  • Note that this is a USB 2.0 drive and is not an SSD, which might explain the slow encryption/decryption times.
  • A removable data drive, meaning that it does not contain any OS or system data. This is relevant in that encrypting an OS drive requires usage of a TPM and/or startup key stored on a USB flash drive.
  • Secured using the password unlock method – I would consider a more secure option if the data were more sensitive (see below).

The drive has a single volume stored as an NTFS file system:

Scope of this blog

It’s important to note the narrow scope of this post – we are only discussing usage of BitLocker to encrypt a removable drive using a password with Windows 8 Pro. This is perhaps one of the “simplest” options and probably one of the most likely that consumers will choose, as specialist hardware isn’t required and everyone understands and uses passwords. There are a number of additional options that would need to be considered if rolling out to an Enterprise that might have specialist hardware available and more stringent security requirements, but the options I have selected are probably “good enough” for the data contained on my personal external USB drive:

  • Encryption/cipher strength:
    • 128-bit AES with Diffuser algorithm (default option in Windows 8 Pro, which is what I’ve stuck with for no reason other than simplicity)
    • 128-bit AES without Diffuser algorithm
    • 256-bit AES with or without Diffuser algorithm
  • Drive type:
    • Operating system/system
    • Removable data drives – the subject of this blog
    • Fixed data drives
  • Unlock method – options differ drastically if encrypting an OS volume:
    • OS drives:
      • TPM only
      • TPM + PIN
      • TPM + startup key
      • TPM + PIN + startup key
      • Startup key only
    • Removable or fixed data drives:
      • Password – this is the method I’ve used for this blog. I don’t have/need a Smart card infrastructure.
      • Smart card
      • Automatic unlocking

There are also a bunch of other more granular options that could be implemented depending on what level of security is required. For example, it is possible to enforce password length and complexity requirements for removable drives via Group Policy.

The scenario/problem:

  • I attempted to encrypt the drive using the Password unlock method from the Windows 8 Pro UI.
  • After around 8 hours, the encryption process appeared to be stuck at 94%. The physical drive was also producing a slightly alarming clicking sound that I have not got to the bottom of yet (presumably indicating a hardware fault).
  • I clicked “Pause”. The encryption dialogue locked up and after an hour or so I attempted a reboot.
  • Subsequent attempts to access the drive failed – upon entering the (correct) BitLocker password, the UI would freeze. In short, the drive and data were no longer accessible.

It’s probably worth re-stating the obvious here: if you don’t have the password, recovery password or recovery key, no solution will restore access to your data. It’s simply not feasible, because this would require cracking 128-bit or 256-bit AES encryption.

Even if you do have one of the aforementioned recovery items, we are still in a pretty bad situation. Encryption is only partial and we can’t interact with the drive via the UI. There is no guarantee that the BitLocker Repair Tool will get your data back in the same way it did for me.
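
As an aside, if you still have a healthy BitLocker drive that you can unlock, it is worth listing its key protectors now and storing the recovery password somewhere safe. A minimal sketch (the drive letter is a placeholder):

# List the key protectors for a BitLocker volume, including the 48-digit
# numerical recovery password if one was created
manage-bde -protectors -get E: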

Caveats out of the way, let’s move on…

Solution requirements

  • The password, recovery password or recovery key for the encrypted volume. Note that various articles and forum posts suggest that the password alone is not sufficient (stating that some combination of the recovery password or recovery key is required in order to repair a BitLocker volume) – the password alone was sufficient to repair my drive in this case (I assume this changes if using another drive type and/or unlock method).
  • A volume with at least as much free space as the partially encrypted volume. This can be a partition on an external or internal drive, although be prepared to remove any existing data before following this process (if the decryption process is successful, data on this volume is removed). Contrary to what various knowledge base articles indicate, a secondary USB drive is NOT required for the scenario described here – as far as I can tell you just need a spare, empty partition that is at least as large as the BitLocker-encrypted drive. A quick size check is sketched after this list.
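
If you want to sanity-check the sizes before starting, something along these lines works on Windows 8 (a sketch; the drive letters are placeholders, E: being the BitLocker drive and F: the spare partition):

# Compare the size of the encrypted volume with the free space on the spare partition
Get-Volume -DriveLetter E, F | Select-Object DriveLetter, FileSystemLabel, Size, SizeRemaining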

Solution steps

The steps described below involve usage of the BitLocker Repair Tool (repair-bde) to decrypt data held within the inaccessible volume. This is included with Windows 8 Pro but may need to be downloaded if using an earlier OS. Note that the process is simplified because a.) I chose the password unlock method and b.) we are repairing a non-OS volume (negating the need to copy the repair tools to a location that is accessible during start up).

  1. Create a 1TB partition dedicated to storing decrypted information during repair (i.e. the drive should be formatted before following the process below)
  2. Use the BitLocker Repair Tool, targeting the spare, empty partition. I ran “repair-bde encrypteddriveletter: emptydriveletter: -password” (you will be prompted to enter the password used to lock/unlock the volume) – see the sketch after these steps for a more concrete example
  3. Decryption will probably get stuck at the same point as encryption (in my case 94%), at which point hit Ctrl+C at the command prompt to interrupt the decryption process
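
To make step 2 a little more concrete, here is a sketch of the commands involved. The drive letters are placeholders for illustration (E: being the partially encrypted drive and F: the spare, empty partition), and the log file switch is simply how I would capture the output shown below:

# Quick-format the spare partition (you may be prompted to confirm; this wipes F:)
format F: /FS:NTFS /Q

# Attempt the repair/decryption, writing a log so progress can be reviewed afterwards
repair-bde E: F: -Password -LogFile C:\temp\repair-bde.log

# Enter the BitLocker password when prompted. If the process hangs (94% in my case),
# Ctrl+C interrupts it – partial decryption may still leave usable files, as described below.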

In my case, my decryption log file looked like this:

LOG INFO: 0x0000002a
Valid metadata at offset 8832512000 found at scan level 1.
LOG INFO: 0x0000002b

Successfully created repair context.

LOG ERROR: 0xc0000037

Failed to read sector at offset 9211592704. (0x00000017)

LOG ERROR: 0xc0000037

Failed to read sector at offset 9211593216. (0x00000017)

…followed by around 20 similar entries that differed only by the offset value

Your data should now be decrypted on the original problematic volume, and the new drive will contain the partially decrypted files (unless the process completes to 100%, in which case these files are removed). With any luck, you should now be able to view your unencrypted/insecure files on the problematic drive. Hoorah! :-)

What went wrong?

I can’t be 100% sure but my best guess is that my external USB drive is suffering from a hardware fault, meaning that sectors located somewhere near the end of my drive are inaccessible. This is based on the decryption log (showing failure to read sectors at a late offset) and the scary clicking sound that I mentioned earlier.

Lessons learned

  • The most important lesson of all here is to back up the data that you wish to encrypt before starting the encryption process. As shown here, if something goes wrong it might be difficult or impossible to recover your data (I was lucky).
  • You do not always need a recovery password or key package to decrypt/repair a drive – just the original encryption password worked in my case (this depends on the unlock method chosen in the first place). As an aside, note that you can back up your recovery key to a Microsoft account (i.e. store it in the “cloud”)
  • Encryption can take ages! This drive took around 8 hours.
  • The decryption process doesn’t necessarily need to hit 100% in order to get data back, especially if encryption didn’t finish in the first place (BitLocker converts the volume incrementally, so files held in areas of the disk that have already been processed can still be recovered). Note that if decrypting to an image file (using “path\imagefile.img” as the OutputVolumeOrImage parameter), “partial” decryption may not succeed (I originally tried this option without success – I hit 94%, stopped decryption using Ctrl+C and the image file appeared to be unusable/corrupt).
  • Multiple partitions are not required if encrypting a removable data drive. I would guess this is also the case for an internal data drive but haven’t tested this.
  • It’s much quicker to “encrypt” used space only as opposed to encrypting a full drive, especially if said drive already contains data. See the notes below.

My revised (more cautious) approach to implementing BitLocker on new removable drives

For what it’s worth, the approach I’ve selected for my personal data (which isn’t particularly sensitive) is:

  • Back up (copy) the data somewhere else and test that the backup works (i.e. try to open some files)
  • Format drive that will be encrypted using BitLocker
  • Run a disk check (e.g. chkdsk /r) to ensure there are no bad sectors
  • Turn BitLocker on for the formatted drive, opting to encrypt “used disk space only” (this is only really appropriate if encrypting a new or freshly formatted drive – a scripted equivalent is sketched after this list)
  • Copy files back to the BitLocker-enabled drive
  • (optional) optimise drive using your favourite disk tool, e.g. PerfectDisk
  • Reboot machine and ensure you can still access files
  • Lock/unlock drive and ensure you can still access files
  • When happy, remove the backup files
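
If you prefer scripting, here is a sketch of how the “turn on” step could be automated rather than driven through the UI. It assumes the BitLocker PowerShell cmdlets in Windows 8 Pro, and the drive letter is a placeholder:

# Prompt for the password that will unlock the drive (captured as a SecureString)
$pw = Read-Host -AsSecureString -Prompt "BitLocker password for the removable drive"

# Encrypt used disk space only, protected by a password (E: is a placeholder)
Enable-BitLocker -MountPoint "E:" -PasswordProtector -Password $pw -UsedSpaceOnly

# Check progress and confirm the volume is fully encrypted before copying data back
Get-BitLockerVolume -MountPoint "E:" | Select-Object VolumeStatus, EncryptionPercentage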

This approach has worked well for me for the last week or so with no issues thus far. Having said that, I’m new to the BitLocker world so would welcome any further thoughts in the comments.

Suggested further reading

Ben

/2013/03/17/resolving-partial-encryption-problems-with-bitlocker/feed/ 3
Configuring Active Directory Import for a SharePoint 2013 User Profile Service Application using PowerShell /2013/01/15/configuring-active-directory-import-for-a-sharepoint-2013-user-profile-service-application-using-powershell/ Tue, 15 Jan 2013 21:30:31 +0000 http://bathawes.com/?p=83 Writing an IT PRO focussed blog post on any aspect of the User Profile Service in SharePoint is tough, as there is a good chance that someone far more informed will come along and write a better one. However, whilst configuring a new SharePoint 2013 environment today I found myself wondering how one automates configuration of the “new” Active Directory Import mode – there doesn’t appear to be much out there on TechNet. I figured a quick post would be useful in the absence of more detailed information.

Active Directory Import is similar to the profile import mechanism we had back in SharePoint Server 2007. It’s an awful lot easier and quicker to configure than “SharePoint Profile Synchronisation”, AKA the “User Profile Synchronisation Service” (in Services on Server), AKA Forefront Identity Manager (FIM) for SharePoint Server 2010. The profile import itself is also very fast in comparison, but there is not feature parity between the two options – one significant drawback, for example, is that it isn’t possible to export properties from SharePoint to AD. Anyway, I suggest you read through the other articles out there on AD Import as this post isn’t meant to be an introduction to the capability.

Note that I have heard that changing from/to AD Import mode after user profiles have been imported is not a good idea. I haven’t explored the detail of this yet, so for now my suggestion is “assume it’s a pain to change later”. 18/04/2013 update: although you can switch between the two different import modes via Central Admin, it will appear that any existing Sync connections have been lost. This is because Sync connections are stored in either the UPA Sync database (if using “FIM” import mode) or in the UPA Profile DB (if using Active Directory Import). AFAIK there is no supported means of migrating Sync connections between the two databases, meaning that the upshot of all this is that you will need to re-create any existing Sync connections when switching import modes. This could be a pain if you have a “complex” Sync connection config – perhaps you have very granular AD OU selections for a large domain – especially given that re-creating connections was a manual exercise at the time of writing (and still is AFAIK).

Import mode                                  DB that stores Sync connections
SharePoint Profile Synchronisation (FIM)     UPA Sync
AD Import                                    UPA Profile

To illustrate this, here is the Synchronisation Connections screen after switching import modes. As this is the first time I have used SharePoint Profile Synchronisation in this case, I don’t have any connections:

[Screenshot: the Synchronisation Connections screen, empty after switching import modes]

…if I switch back to AD Import, I get my Sync connection back (the connection was not deleted – it’s just that Sync connections created in AD Import mode are stored in the Profile DB, and connections created in “FIM” mode are stored in the Sync database):

[Screenshot: the Synchronisation Connections screen showing the restored connection after switching back to AD Import]

Here is the Sync connection in the ADImportDCMapping table of the UPA Profile DB (used if you are in AD Import mode):

[Screenshot: the Sync connection in the ADImportDCMapping table of the UPA Profile DB]

…and in case you are wondering, here is a Sync connection in the mms_management_agent table of the UPA Sync DB (used if you are in FIM mode):

[Screenshot: the Sync connection in the mms_management_agent table of the UPA Sync DB]

One other little nugget I can offer is that your Synchronisation Connections will not appear if the User Profile Synchronisation Service is stopped whilst in SharePoint Profile Synchronisation​ / FIM mode. This is because stopping the UPS deprovisions the synchronisation service, but does not delete any data in your UPA databases. To get your connections back, you will need to re-provision (start) the User Profile Synchronisation Service if in FIM mode.
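
As a quick aside, here is a sketch of how to check whether the synchronisation service instance is actually running on a given server (nothing here is specific to my farm):

# Show the status of the User Profile Synchronization Service instance on each server
Get-SPServiceInstance | Where-Object { $_.TypeName -like "User Profile Synchronization*" } |
    Select-Object TypeName, Server, Status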

24/04/2013 update: having reviewed an SPC 2012 session entitled “Working with User Profiles in SharePoint Server 2013” presented by Sheyi Adenouga and KC Cross Rowley, it appears that you may also need to run the Set-SPProfileServiceApplication cmdlet with the -PurgeNonImportedObjects parameter to clear up any discrepancies that may exist after switching import modes (using PowerShell):

$upa = Get-SPServiceApplication -Name "UserProfileServiceAppName"    # substitute the name of your UPA
Set-SPProfileServiceApplication $upa -GetNonImportedObjects $true    # report profiles that were not imported
Set-SPProfileServiceApplication $upa -PurgeNonImportedObjects $true  # remove them

Note that I haven’t tested this extensively, and the original point still stands – switching import modes is a bit of a pain and you should therefore plan accordingly by ensuring the selected import mode meets your requirements.

Enabling AD Import mode can be achieved via SPCA following UPA creation, within “Configure Synchronization Settings”. You can happily change the setting in the UI (although I have had occasional issues with the relevant JavaScript not firing):

[Screenshot: the Configure Synchronization Settings page in Central Administration]

However, setting the option via PowerShell does not appear to be well documented. I scanned the properties of my UPA and stumbled upon “NoILMUsed”. The top search result for that property stated in essence that it is for Microsoft internal use only (in the context of SP2010). Not a good start.

However, looking a little further I found a support article that proved more useful.

Although the context of this support article in itself is quite interesting – it looks as though removing sync connections whilst in Active Directory Import mode is problematic – there is a gem sat within the “More information” section. According to that article, the following script snippet can be used to enable AD Import mode in SharePoint Server 2013:

$upa = Get-SPServiceApplication -Name "UserProfileServiceAppName"
$upa.NoILMUsed = $true
$upa.Update()

I added this in to my UPA creation script (which is a modified version of an existing script) and have since successfully tested the snippet a handful of times in SP2013 RTM. A cut-down sketch of the end result is below.
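
For illustration only, here is a rough sketch of how the snippet might sit within a UPA creation script. The service application name, application pool and database names are placeholders rather than anything from my actual environment:

# Create the User Profile Service Application (names and databases are placeholders)
$upa = New-SPProfileServiceApplication -Name "User Profile Service Application" `
    -ApplicationPool "SharePoint Service Applications" `
    -ProfileDBName "UPA_Profile" -SocialDBName "UPA_Social" -ProfileSyncDBName "UPA_Sync"

New-SPProfileServiceApplicationProxy -Name "User Profile Service Application Proxy" `
    -ServiceApplication $upa -DefaultProxyGroup | Out-Null

# Switch the new UPA to Active Directory Import mode
$upa.NoILMUsed = $true
$upa.Update()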

Oh, and in case you are wondering, the “User Profile Synchronization Service” does not need to be started when using AD Import (hoorah!):

[Screenshot: Services on Server showing the User Profile Synchronization Service stopped]

Ben

Using SPWebService.FileWriteChunkSize to turn off Shredded Storage in SharePoint 2013 RTM /2013/01/07/using-spwebservice-filewritechunksize-to-turn-off-shredded-storage-in-sharepoint-2013-rtm/ /2013/01/07/using-spwebservice-filewritechunksize-to-turn-off-shredded-storage-in-sharepoint-2013-rtm/#comments Mon, 07 Jan 2013 23:00:42 +0000 http://bathawes.com/?p=81
14/11/2013 update: Chris Mullendore, a Microsoft PFE, has written a post that discusses both Shredded Storage and RBS. He whole-heartedly recommends using the default FileWriteChunkSize settings. I’ll leave this blog post up just to illustrate that it is possible to modify this value, but it appears to be one of those “just because you can, doesn’t mean you should” settings.

Over the last few weeks​ I’ve been looking at some of the new capabilities in SharePoint 2013 from an infrastructure perspective, focusing mainly on search and the topic of this blog post: Shredded Storage.

There are already a number of posts on this feature that provide a good introduction. I won’t rehash those and will instead point you towards a couple of decent ones.

I’ll reserve my own opinion on Shredded Storage for now as I haven’t had sufficient time to test it. The purpose of this post is simply to demonstrate how to turn it off. That does not mean I recommend turning it off – you will need to review the benefits and drawbacks (most likely starting with the posts above) and decide whether it is appropriate for your usage scenario. For what it’s worth, I think the vast majority of collaboration sites will keep Shredded Storage on due to reduced storage cost and the IO benefits it provides whilst using versioning.

Turning it off is straightforward using PowerShell, but it took me a few (failed) attempts to realise that the SPWebService object needs to be updated once the FileWriteChunkSize property has been modified:

 
$wa = Get-SPWebApplication "http://yourwebapp"    # placeholder – substitute your web application URL
$wa.WebService.FileWriteChunkSize = 1073741824    # 1 GB, specified in bytes
$wa.WebService.Update()

In the example above I have set the FileWriteChunkSize to 1 GB (specified in bytes), effectively disabling Shredded Storage for the vast majority of content.
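
As a sanity check (and to make the intent of that magic number a little clearer), the value can be read back afterwards. A small sketch using the same placeholder web application URL:

# PowerShell understands the 1GB literal, so the assignment above is equivalent to:
# $wa.WebService.FileWriteChunkSize = 1GB

# Read the value back to confirm the change was persisted
(Get-SPWebApplication "http://yourwebapp").WebService.FileWriteChunkSize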

Just to prove this works, here are a couple of screenshots:

[Screenshot: setting FileWriteChunkSize to 1 GB to prevent BLOBs from being shredded]

[Screenshot: file properties of an example BLOB – in case you are wondering, it’s a video I took whilst watching sumo wrestling in Japan :-)]

[Screenshot: querying SQL to show that the BLOB has not been shredded]

For clarity, I used SharePoint Server 2013, version 15.0.4420.1017 for this test.

I find it slightly troubling that the original enumeration was seemingly disabled for the RTM release given that it supposedly worked in the Release Preview (I have not verified this). It leaves me wondering whether Microsoft will allow us to toggle Shredded Storage on/off in future releases.

For now, we have a choice as to whether content is shredded which is a good thing. Obviously that means that there is more to think about in terms of storage when compared to SharePoint 2010, especially if considering BLOB externalisation (i.e. RBS). This may well be the subject of a future post.

Ben

/2013/01/07/using-spwebservice-filewritechunksize-to-turn-off-shredded-storage-in-sharepoint-2013-rtm/feed/ 2