Archive

Archive for the ‘SharePoint 2013’ Category

Adding multiple versions of the same assembly to a single SharePoint WSP

This discovery came about when we wanted a simple way to deploy NLog with our Visual Studio 2013 SharePoint project components, but didn’t want to bundle it with any particular component, since removing that component would remove NLog from the GAC for all of them, even if each had placed it there, because it’s shared.  So, like lots of people, we made a SharePoint project to contain this dependency, among others, which was fine until we upgraded NLog versions.

In doing so we started to break older components that were looking for NLog 2.0 after we swapped in NLog 3.2, and although we could have used assembly remapping or upgraded all of the components, neither modifying all the config files (web apps, timer service, tools, etc.) nor redeploying all the components was considered a nice transitional solution.

So, simply adding both DLLs into the SharePoint Package > Advanced > Additional Assemblies looked straightforward.  A Dependencies folder was added to the solution, with a sub folder for each version, and both of those versions were added as additional assemblies to be deployed to the GAC.  Both were added from different 'Source Paths' ("..\Dependencies\NLog_2.0\NLog.dll" and "..\Dependencies\NLog_3.2\NLog.dll"), both deployed to the GAC, and both with the same default 'Location' value of "NLog.dll".

Publish… and Fail…

Package.package : error SPT16: Both “MySharePointWSP.Logging” and “MySharePointWSP.Logging” contain a file that deploys to the same Package location: NLog.dll

HUH????

So I searched online for that type of error and found little, apart from a hint about making the Location of each match the Source Path.  The VS SharePoint dialog didn’t like that at all, but after some experimenting I noticed something: it was OK with partial paths, and these paths were used as the GAC sub paths under the assembly folder.  With a bit more mucking about, deploying each individually, I copied the GAC paths, trimmed them down to just the assembly-specific portion, and placed them in the 'Location' field, and to my delight, it worked.  Even though they were each going to a different GAC (C:\Windows\assembly\GAC_MSIL\NLog and C:\Windows\Microsoft.NET\assembly\GAC_MSIL\NLog), both deployed to the right place when I explicitly provided a location that included the version/public key info.
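If you don’t want to deploy an assembly by hand just to read its GAC path, a quick throwaway snippet like the following can print where Fusion resolved it from (a sketch, assuming the assembly is already in a GAC and you know its full name; the tail of the output starting at "NLog\" is exactly the 'Location' value needed):

using System;
using System.Reflection;

class GacPathProbe
{
    static void Main()
    {
        // Load by full name so Fusion resolves the exact version from the GAC,
        // then print the resolved path.
        Assembly asm = Assembly.Load(
            "NLog, Version=3.2.0.0, Culture=neutral, PublicKeyToken=5120e14c03d0593c");
        Console.WriteLine(asm.Location);
        // e.g. C:\Windows\Microsoft.NET\assembly\GAC_MSIL\NLog\v4.0_3.2.0.0__5120e14c03d0593c\NLog.dll
    }
}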

The NLog 2.0 assembly reference in the SharePoint Package > Advanced > Additional Assemblies had "NLog\2.0.0.0__5120e14c03d0593c\NLog.dll" as its location, and the NLog 3.2 assembly reference had "NLog\v4.0_3.2.0.0__5120e14c03d0593c\NLog.dll" as its location.

The Package.package file contents looked like this…

<assemblies>
    <customAssembly location="NLog\2.0.0.0__5120e14c03d0593c\NLog.dll" deploymentTarget="GlobalAssemblyCache" sourcePath="..\Dependencies\NLog_2.0\NLog.dll" />
    <customAssembly location="NLog\v4.0_3.2.0.0__5120e14c03d0593c\NLog.dll" deploymentTarget="GlobalAssemblyCache" sourcePath="..\Dependencies\NLog_3.2\NLog.dll" />
</assemblies>

So it’s odd, but what I took away from this is that the packaging process apparently tries to validate the Locations, looking for possible conflicts, without taking into account that the fully qualified paths, once they go into the GAC (the old or the new one), will differ by version and key.  Manually entering the versioned/keyed path makes them unique and they deploy as normal, and once the next version of NLog comes out, I expect to see it in the new GAC alongside the NLog 3.2 version, with the NLog 2.0 version still in the old GAC, all deployed from one WSP.


Update…

Adding the latest NLog 4.0 to my package also worked, with the addition of another additional assembly.  The default location also worked, since it didn’t conflict while the other two were explicit, but I expanded it myself as well, so the Package.package file also contained this line:

<customAssembly location="NLog\v4.0_4.0.0.0__5120e14c03d0593c\NLog.dll" deploymentTarget="GlobalAssemblyCache" sourcePath="..\Dependencies\NLog_4.1.2\NLog.dll" />

And I now have all three NLog assemblies deploying, one to the old GAC and two to the new GAC, all from a single WSP.

Importing PowerShell Modules from GAC DLLs in order to use their CmdLets

What I am writing here is nothing new, just a consolidation of the attempted answers I found when trying to do what the title of this post describes.  To set the stage: this is a DLL deployed as part of a SharePoint solution, which is why it gets dropped into the GAC, and its CmdLets are then used to configure the solution once the SharePoint WSPs are deployed.

Import-Module will take a full path to a DLL, in the GAC or anywhere else, in order to use the CmdLets within it, but I didn’t want to be mucking about resolving the GAC path for a DLL; I just wanted to give it the assembly name without the version and whatnot.  Looking at other people’s comments about loading DLLs by partial name, the obvious "LoadWithPartialName" was mentioned; however, that loads the DLL into memory but doesn’t load it as a module for CmdLet use…

Chaining the two commands together does work nicely though:

# Resolve the assembly from the GAC by partial name (no version or public key needed)
$assembly = [System.Reflection.Assembly]::LoadWithPartialName('MyProject.PowerShell')

# Import-Module accepts the loaded Assembly object directly, exposing its CmdLets
Import-Module -Assembly $assembly

# CmdLets from the DLL are now available in the session
New-ObjectToPerformTask -Url $url

It’s nothing special, but it’s a nice, complete way to load PowerShell CmdLets from the GAC using partial names, which makes it great for SharePoint-deployed DLLs.

 

SharePoint… smart enough to adapt but too dumb to tell you

We came across something recently that is sort of interesting and extremely annoying.

So it seems that when you add a new sub folder to a SharePoint document library (either through the UI or code), if the folder name ends in "_file", "_files", ".file", or ".files", SharePoint will automatically tack an "_" (underscore) onto the end of the folder name.  I guess internally it uses those name patterns for something, just as prefixing a folder name with "_" is supposed to hide it.  Anyway, it tacks on the underscore to prevent the name matching the controlled pattern it uses… that’s the interesting bit, but it’s been that way for a long time.

Normally a user might not even notice or care when looking in the UI, since it might blend in, but we have a very clever data deployment system that will create folders on the fly to match the XML data structure we give it for publishing documents and such.  This works great: we get the root SPFolder and drill down the folder path, creating sub folders as we go.  The code then returns the folder path as it created it, giving us a nice normalized string, and finally we add a document with that final folder path.  In the code, to be on the safe side, we rebuild the folder path to adapt to any changes SharePoint might have made when calling spFolder.SubFolders.Add(folderName).  We simply take the Names of the newly created SPFolder objects and join them together as we create the path… seemed like a sound plan.

This is where SharePoint screwed us up recently, when testing some customer document loading.  If a name like "TEST_file" is passed in, SharePoint will create the folder, appending the additional "_", but will then return a new SPFolder object with the original name we asked for, not the new name it actually created it with.  So basically it did one thing, then told us it did another.  The path was created, but we never knew of the change, and when uploading the document with a URL based on what SharePoint told us it did, the adding of the document failed, complaining the path didn’t exist.
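A minimal repro sketch of the behaviour (illustrative code against an assumed "Documents" library, not our actual deployment system):

using System;
using Microsoft.SharePoint;

class FolderNameRepro
{
    static void Main()
    {
        using (SPSite site = new SPSite("http://server/sites/test"))
        using (SPWeb web = site.OpenWeb())
        {
            SPFolder root = web.Lists["Documents"].RootFolder;

            // SharePoint actually creates "TEST_file_" on the server...
            SPFolder created = root.SubFolders.Add("TEST_file");

            // ...but the returned object reports the name we asked for, so any
            // path built from created.Name points at a folder that doesn't exist.
            Console.WriteLine(created.Name); // prints "TEST_file"
        }
    }
}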

Just a nice example of how, even when you try to adapt to what an external component (SharePoint in this case) might do, you still can’t trust it, because it can in essence “lie” :).  This is why testing with real-world data is so important: even if you write only a few lines of code, the things you depend on might have millions, and as hard as people try, things can be missed, mistakes made, and you might just be the first person out of thousands of other developers to notice, because of one little word in all that data.

SharePoint and Extension methods… super cool

I must admit that extension methods in C# have got to be one of the coolest ideas ever for the language, for one huge reason: they allow IntelliSense to automatically expose the methods as if they were part of the original class.  This opened up a huge opportunity for simplifying complex tasks into easily reusable bites that are automatically available once the containing assembly is referenced.  Granted, this isn’t much different from writing a custom library of methods to wrap all the heavy lifting, but extension methods put them right in front of you without needing to look them up, and that means people can’t help but use them… that’s cool.  This means a more natural reuse of existing code, and more code usage should help testing and code coverage.

Now, along comes SharePoint, a prime example of something hideously beautiful… even with all the new things added over the years, it’s still at its heart a big customizable list manager that lets you granularly control and render the types and views of the data in each list.  Unfortunately, as with anything technology based, the simpler it is for the end user, the more complex it is for the engineers.  It’s like a self-parking car… a $1 button on your dashboard probably equates to a lifetime of engineering and billions in R&D if you trace the evolution of all the tech involved.

SharePoint offers lots of control through a very extensive server-side object model (there are also client-side object models, but they aren’t the focus here), and everything is linked together in a nice parent-child relationship of objects.  This is great for understanding the inner workings of everything, but day to day, and especially for code maintenance and robustness, it’s a bit of a pain.  Two of the biggest pain points were security changes (not uncommonly a pain), and setting the data of a specific list item field (it’s simple to do, but just as simple in what it does).

Starting with the security activities: these were among the most obvious to implement as extension methods, since there are multiple securable objects in SharePoint.  With a couple of overloads tagged as extensions, I took the logic of adding a member, for example (finding the user by login name, checking if a role association already existed, and if not, finding the role definitions, then the particular role definition by name, creating a role association, adding it, and updating the securable), and turned it into a single call on SPWebs and SPLists that takes a login name and role names as strings.  This meant that both long-time and recently hired developers writing code to manipulate security in SharePoint simply had to call spWeb.AddMemberWithRole(“someuser”, “somerole”)… it just showed up for everyone to use; far easier.  Because it was so easy to use without hunting for the methods, people weren’t rewriting new versions; we just used what appeared as methods of the SPWeb and SPList objects.  This included demo data imports and data migrations, which meant the same block of code was easily reused and easily tested everywhere under tons of edge cases.
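A minimal sketch of the shape of such an extension (illustrative only, not our actual implementation; it assumes the web already has unique role assignments):

using Microsoft.SharePoint;

public static class SPSecurityExtensions
{
    // Resolve the login and role by name and bind them on the web in one call.
    public static void AddMemberWithRole(this SPWeb web, string loginName, string roleName)
    {
        SPUser user = web.EnsureUser(loginName);                // finds or adds the user
        SPRoleDefinition role = web.RoleDefinitions[roleName];  // e.g. "Contribute"

        SPRoleAssignment assignment = new SPRoleAssignment(user);
        assignment.RoleDefinitionBindings.Add(role);
        web.RoleAssignments.Add(assignment);                    // persists the change
    }
}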

I realize that by passing in two strings we are not able to reuse references to underlying objects we had already found when previously calling this method, and this is true, but given where we used these (adding one user at a time in a stateless web environment), we usually started with string data for each request anyway.  We also could have strung the logic of the larger extension method together from smaller extension methods on each object, with things like FindOrCreateRoleAssociation and such, but so far the need hasn’t come up, and it’s been over three years.

Now, for the most powerful extension method in the arsenal: setting the field values on items.  This extension method is probably the most used, and there is a reason for it.  Out of the box, SharePoint provides a simple indexed method for setting a field value in a list item, and it works ‘fine’ for most field types… the problem is that, as simple as it is to call, it behaves just as simply.  It doesn’t do any thinking for us, which means you need to tell it exactly what to do.  It works fine for string and number typed fields, but once you start to deal with lookups and taxonomy fields, it requires a lot of help.  I am not going to go into specifics due to confidentiality, but essentially this easy-to-access extension method provides two things that SharePoint doesn’t.

First, we would pass in an object and try to figure out how it fits based on the field type we are setting.  For example, if you pass “123;#Hello World” for a lookup field, or an SPFieldUserValue for a user field, we let SharePoint handle it, but if you just pass “Hello World”, it can use the information available on that field type to figure out exactly what SharePoint needs for a fully qualified value.  By providing support for lookups, users, DateTimes (with safe range checking), and taxonomy fields (even supporting navigating term store paths), we were able to use this method for copying data in all our event receivers, custom input screens, and all of our data migration/importing (which means we can provide highly adaptive importing, reducing migration efforts).  The same consistent data type handling is used everywhere, and after a lot of edge case handling, it is far smarter and more adaptive than SharePoint out of the box.
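To give a flavour of just the lookup case (a simplified sketch with assumed names, not the confidential implementation; a real version would use a CAML query rather than enumerating the lookup list):

using System;
using Microsoft.SharePoint;

public static class SPListItemExtensions
{
    // Illustrative: turn plain text into the fully qualified lookup value SharePoint wants.
    // Returns true only when a value was actually assigned.
    public static bool TrySetLookupByText(this SPListItem item, string fieldName, string text)
    {
        if (!item.Fields.ContainsField(fieldName))
            return false;

        SPFieldLookup field = item.Fields.GetField(fieldName) as SPFieldLookup;
        if (field == null || string.IsNullOrEmpty(field.LookupList))
            return false;

        // Find the target item whose display column matches the text.
        SPList lookupList = item.Web.Lists[new Guid(field.LookupList)];
        foreach (SPListItem candidate in lookupList.Items)
        {
            if (string.Equals(Convert.ToString(candidate[field.LookupField]), text,
                              StringComparison.OrdinalIgnoreCase))
            {
                item[field.Id] = new SPFieldLookupValue(candidate.ID, text);
                return true;
            }
        }
        return false;
    }
}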

The second thing it does that SharePoint doesn’t is provide feedback about whether it actually performed a change.  The method tells us if it actually made a change, so we can track our ‘sets’ and then only call SPListItem.Update if we actually changed something.  Why is this important?  Well, even though SharePoint has the raw data, it doesn’t detect whether changes actually occurred, which means event receivers will always fire even if nothing changed, and AfterProperties are always set for any fields touched.  By tracking field changes ourselves and only saving when required, we are able to reduce the execution of event receivers to a minimum, improving performance on saves.
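In use, that turns into the pattern below (again with the hypothetical helper from the sketch above):

// Only persist when a set actually changed something, so the
// ItemUpdating/ItemUpdated receivers don't fire for no-op saves.
bool changed = item.TrySetLookupByText("Customer", "Contoso");
changed |= item.TrySetLookupByText("Region", "EMEA");

if (changed)
{
    item.Update();
}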

As usual, none of these methods is special in itself, but using them as extension methods with descriptive method names means that anyone working on the code can and will use them, improving custom code reuse, reducing the number of replicated methods, and helping new developer training.  Some of the hundreds of extension methods we use, like the couple mentioned, become so natural to call that when working on something new that performs basic SharePoint tasks, you start typing the method names out of habit, forgetting that they aren’t there out of the box.

Refreshing SPWeb object during enumeration

This arose for me during a PowerShell script, when I had to enumerate each web, add a field to the content types used by a particular SharePoint List in each web, and then query the list items to set the values for that newly added column.

Updating the content types (including child types) was fine; however, since I had already accessed the list’s content types to find the root types, the changes made weren’t reflected in memory within the list object.  Attempts to update the list items for the newly added field failed with an error that they don’t exist or may have been deleted, even if I queried the new field specifically.

Getting the List again didn’t seem to help, and calling Update on the List threw a save conflict (possibly because the List’s instance of the content types had just been updated by the hierarchical save from the parent content type).

I didn’t want to just reopen the web, since I was enumerating them, and that would have required a lot of messy code to track where I was and used more memory by doubling up the references to the web (every little bit matters).

Unfortunately SharePoint doesn’t provide a way to “refresh” the in-memory objects, I guess because they usually have very short life spans.

It does however provide a Close method, but no Open method.  After a bit of decompiling I found that there are a handful of properties that will trigger the same internal initialization as opening the web the first time.  I decided to try the one with the smallest footprint: Exists.

So with this code (PowerShell, but it would be the same in C#, etc.):

# Release the web's cached internal state...
$spWeb.Close()
# ...then touch Exists, which re-runs the same internal initialization as opening the web
$exists = $spWeb.Exists

I was able to make some heavy changes, then close and reopen the web (via Exists), and run some queries and updates, all without mucking up the web enumeration (a foreach loop in this case).

Essentially, an SPWeb re-open procedure maintaining the same SPWeb object reference in memory.
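For reference, the same trick sketched in C# (assuming the usual web enumeration; the Exists read is what re-runs the initialization):

using (SPSite site = new SPSite("http://server"))
{
    foreach (SPWeb web in site.AllWebs)
    {
        // ...heavy content type changes against this web...

        web.Close();                // drop the stale in-memory state
        bool reopened = web.Exists; // reading Exists re-initializes the web

        // ...list item queries and updates now see the fresh state...

        web.Dispose();              // each SPWeb from AllWebs needs disposing anyway
    }
}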

SharePoint 2013 – COMException 0x8007047e When Adding A Field To A List By An Event Receiver

2014/08/27

I was trying to add a computed field to a list whenever another field (in this case a taxonomy field) was added.  This was to support custom rendering options for the taxonomy field’s values, since you can’t modify the JSLink options directly on a taxonomy field.  Everything worked the first time, but in hardening the code I started receiving the error:

System.Runtime.InteropServices.COMException: 0x8007047e
StackTrace:
   at Microsoft.SharePoint.SPFieldCollection.AddFieldAsXmlInternal(String schemaXml, Boolean addToDefaultView, SPAddFieldOptions op, Boolean isMigration, Boolean fResetCTCol)
   at Microsoft.SharePoint.SPFieldCollection.AddFieldAsXml(String strXml)

Unfortunately there was no actual error message, just the error code, but it turns out that this is generally a save conflict error in SharePoint.

So it had worked a few times, then started throwing the error.  At first I was adding the second field during the FieldAdding event, but I moved it to the FieldAdded event to try to avoid any conflicts in the first place; as you can see, I still got the error, so something else was happening.

Well, since I was debugging, I was actually providing the conditions for the error: the taxonomy field also adds its own secondary fields, and debugging slowed the process down enough to have both saves occur at the same time, and mine was generally losing out.

So in the end I set the FieldAdded/FieldDeleted event receivers to be synchronous (via the Elements file, but you could do it in code), and now my field and the taxonomy field’s secondary fields are added without stepping on each other, regardless of debugging or not.
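For the code route, a sketch of registering the receiver synchronously might look like this (names are illustrative; the Elements file approach sets the same Synchronization value declaratively):

using Microsoft.SharePoint;

public class MyFieldEventReceiver : SPListEventReceiver
{
    public override void FieldAdded(SPListEventProperties properties)
    {
        // add the computed companion field here
    }
}

public static class ReceiverSetup
{
    public static void RegisterSynchronousFieldAdded(SPList list)
    {
        SPEventReceiverDefinition receiver = list.EventReceivers.Add();
        receiver.Name = "AddComputedFieldOnFieldAdded";
        receiver.Type = SPEventReceiverType.FieldAdded;
        receiver.Assembly = typeof(MyFieldEventReceiver).Assembly.FullName;
        receiver.Class = typeof(MyFieldEventReceiver).FullName;
        receiver.Synchronization = SPEventReceiverSynchronization.Synchronous; // the key bit
        receiver.Update();
    }
}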

SharePoint 2013 – Manipulating the Term Store from a Feature event receiver

When manipulating the Term Store in SharePoint 2013, such as adding new groups or term sets, you might receive a “System.UnauthorizedAccessException: The current user has insufficient permissions to perform this operation.” error.  You can get this error, even if your user account is a Term Store administrator, when trying to make those changes within a feature’s event receiver and then activating those features directly, or via an ONet.xml reference, from PowerShell.

It seems that this isn’t really a Term Store application rights issue, but rather the SharePoint execution privileges stopping things.  When you activate the feature from the front-end website (site settings) it works fine, as it seems that page automatically wraps feature activation in RunWithElevatedPrivileges.  Activating via PowerShell doesn’t get this call automatically, so you might need to add it directly to your feature event receiver.

After that, activating the feature from the web or from PowerShell will run your Term Store creation code elevated, and it should work fine.
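A minimal sketch of such a receiver (assuming a site-scoped feature, the default site collection term store, and illustrative group/term set names):

using Microsoft.SharePoint;
using Microsoft.SharePoint.Taxonomy;

public class ProvisionTermsReceiver : SPFeatureReceiver
{
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        SPSite site = (SPSite)properties.Feature.Parent;

        SPSecurity.RunWithElevatedPrivileges(delegate()
        {
            // Re-open the site under the elevated identity before touching the term store.
            using (SPSite elevatedSite = new SPSite(site.ID))
            {
                TaxonomySession session = new TaxonomySession(elevatedSite);
                TermStore store = session.DefaultSiteCollectionTermStore;

                Group group = store.CreateGroup("My Group");
                group.CreateTermSet("My Term Set");
                store.CommitAll();
            }
        });
    }
}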

(PowerShell cmdlets don’t manifest elevation issues like this.)

The Microsoft.SharePoint.Deployment SPImport class fails when calling the ImportNavigationXml method


The failure is triggered whenever a user opens the Navigation page under Site Settings of a SharePoint site and clicks the “OK” button, even without making changes.  Any future export and attempt to import the site will then fail, with the root cause being a SQL exception:

System.Data.SqlException (0x80131904): Invalid length parameter passed to the RIGHT function.

Reports online indicate that an assumption made in this SQL function causes a negative number to be passed in, and an exception occurs.

No combination of navigation settings, including toggling all menu items, seems to resolve the issue.  It appears to be one of three core navigation nodes being passed in that triggers the exception.

Errors importing SharePoint web parts and Health Monitor alerts

This is a draft, but to get the notes down I am publishing it and will update it later with the details.

We had been receiving error alerts in the Health Monitor after upgrading NeoStream’s P4E product line to SharePoint 2013.  Although the alerts were front and center in the monitor, they weren’t breaking anything, so we ignored them.

I recently started mucking around with importing sites from PowerShell and came across some errors that had the same smell about them.

These errors seemed situational at first, occurring for a while and then disappearing.  It turns out that the error “failed to import unknown web part”, which would last for a while in PowerShell and then go away, would actually go away the moment I opened a specific SharePoint web part page in the browser.  Something was amiss, and SharePoint corrected it when the page opened.

I ran a couple of scripts to diagnose the web parts in error and nailed it down to a custom web part in the product.  This web part was pretty benign in itself, and after checking that it wasn’t the configuration of the web part, I cracked open the code.

The answer was immediately apparent…

Something that a lot of people don’t know is that SharePoint code runs in a few different processes, including the IIS worker process, the SharePoint Timer service, and (when you use PowerShell) PowerShell itself.  Of these processes and more, only one of them actually has an SPContext.Current, which is created in the ASP.Net pipeline based on the HTTP request.  This means that any code calling methods or properties on SPContext.Current outside of ASP.Net will fail with a null reference error.

So what’s that got to do with web parts, since they are UI and wouldn’t need to be loaded outside of the website?  Well, SharePoint tries to validate the registered web parts (reporting the results in the monitor) and configure the web parts when importing via PowerShell.  These processes require loading the data, for which SharePoint uses the binary serializer, and that actually creates an instance of the object.  It doesn’t raise any control lifecycle events (which would cover most of the code), but it does call the parameterless constructor and initialize any fields.

So if you have any code called from the constructor that uses SPContext.Current without checking it for null, an exception will occur during deserialization.  This exception will also occur if you initialize any fields from SPContext.Current, which was the cause in this case:

private SPWeb spWeb = SPContext.Current.Web;

Making that a read-only property solved it, as properties aren’t touched by the serializer.
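A before/after sketch of that fix (the property name is illustrative, and the null check is worth having anyway for non-web processes):

using Microsoft.SharePoint;
using System.Web.UI.WebControls.WebParts;

public class MyWebPart : WebPart
{
    // BEFORE: field initializers run during binary deserialization,
    // where SPContext.Current is null outside the ASP.Net pipeline:
    // private SPWeb spWeb = SPContext.Current.Web;

    // AFTER: a read-only property defers the SPContext access until it is
    // actually read at render time; the serializer never touches properties.
    private SPWeb CurrentWeb
    {
        get { return SPContext.Current != null ? SPContext.Current.Web : null; }
    }
}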

Issues with .Net Assembly Binding Redirection between .Net 2.0, 3.0 and 3.5 to .Net 4.0 and 4.5 (relating to SharePoint 2010 and 2013)

After about three hours of uncertainty as to why it didn’t work… a last-ditch test and a 3am eureka moment converged on the following:

So, a bit of backstory… the company I am working for, without going into the sensitive details, was writing a console app that we wanted to work against both SharePoint 2010 and SharePoint 2013.  Well, there are lots of hacky ways to do that, including two different code branches, which then need to be kept in sync, or one code branch but doubling up project files and solutions in the folders, one pointing at SP2010 and one at SP2013… We use TFS build services, so any option would work, but most are management nightmares… so on being asked, I suggested we use assembly binding redirection (just like SharePoint itself does for loading web parts from older versions), as it’s the easiest to maintain.

So we tried… and after a good number of hours by another dev, and then three more by myself, we couldn’t figure out why it didn’t work.  It would compile fine, and using FUSLOGVW.EXE we were able to see the redirect occurring, but then it wasn’t able to find the SharePoint 2013 assemblies (version 15).  All we had was a cryptic message at the end of the fusion log about “GAC Lookup was unsuccessful”.

So we ended the day with a big WTF?!?!

Well, about 2 minutes after I logged off, I decided to try one more thing and got a build working by changing the target framework to .Net 4.5… a step forward.  Although this suddenly helped for SharePoint 2013, it stopped working for SharePoint 2010, again complaining about not being able to find SharePoint.  Odd… but it’s something.

Then at 3am, the EUREKA moment happened… I figured out ‘why’, and then the details to make it all come together.  The problem is that .Net 4.0 and above use a different GAC location (it’s now in the place it should have always been).  If the project is a .Net Framework 3.5 project, then it’s looking in the wrong GAC for the SP2013 assemblies (since they are .Net 4.5).  This was the whole reason it didn’t work from the beginning; when compiled to the right target framework, you don’t even need your own binding redirection entries, since SharePoint actually includes the minimum ones as policies in the .Net 4.5 GAC.  Having made the project a .Net 4.5 project, it would have worked fine on SP2013 without any config settings, as it would have looked in the correct GAC.

There is a catch, though, that would seem to be SharePoint specific.  SharePoint 2010 actually checks to ensure the app is targeting the ‘supported’ .Net 3.5 framework.  What this means is that to run against SP2010 the EXE needs to target .Net v3.5, while to run against SP2013 it needs to target .Net v4.5 (or v4.0, but v4.5 is better).  The project can reference the SP2010 assemblies and, if targeting the correct framework, will automatically adapt for the basic Microsoft.SharePoint assembly (there is a ‘but’ here that didn’t affect our specific situation right now).

So the next question is, how do we have one project target two different framework versions… answer: we don’t.  Luckily for us, MSBuild allows us to pass in overriding values for many things, one of which is the TargetFrameworkVersion (and, in support of that, the ToolsVersion if needed).  So we can instead make two build definitions in TFS (or simply two MSBuild commands if you are using some other automated build tool), one that does the default (or explicitly targets .Net 3.5) and one that passes additional arguments to MSBuild to explicitly target .Net 4.5 (it might also be possible to do one build definition with two outputs, one for each framework).  Now when the builds are kicked off, the exact same code, project file, and solution are used, but MSBuild overrides the version and gives us one build for SP2010 and one for SP2013.

Here are the details of the changes to make.  Open the original solution and ensure that it’s targeting .Net 3.5 and that the SharePoint reference points to the SP2010 (v14) DLL.  Create one build definition to do the defaults, then a second build definition with additional MSBuild arguments (in TFS you can also add them per queued build if you want to test this first) of:

/tv:4.0 /p:TargetFrameworkVersion=v4.5
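Outside of TFS, the equivalent raw invocation would be something like this (assuming a solution named MySolution.sln):

msbuild MySolution.sln /tv:4.0 /p:TargetFrameworkVersion=v4.5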

* Note: both build definitions should point to the same SharePoint 2010 build server for the compile to work, and you will also need to ensure .Net 4.5 is installed on that SharePoint 2010 build server (this doesn’t affect SharePoint, but is required to pick the 4.5 target when compiling).

Now, when you kick off each build, it should create one SP2010-compatible EXE and one SP2013-compatible EXE (if we wanted, we could use more arguments to alter the EXE output name to make them distinct, such as “MyToolForSharePoint2010.exe” and “MyToolForSharePoint2013.exe”).

One code base… one thing to maintain, driven by configuration.

 

Some useful references:

http://msdn.microsoft.com/en-us/library/bb383985(v=vs.90).aspx

http://stackoverflow.com/questions/12003475/override-target-framework-from-command-line