SharePoint Lookup Field Throttling Causes Missing Fields in CSOM Query

A very annoying bug appeared a few weeks ago in one of our production environments running SharePoint 2013.

SYMPTOMS

You have a custom list with some lookup columns that refer to other lists. In our case the main list contained news and the lookup columns contained the classification of the news.

You add a new lookup column to the list, due to customer feedback.

Suddenly, you can't retrieve the list items from code any more. When you run a CSOM query, the lookup fields are lost; only the non-lookup fields are retrieved.

CAUSE

SharePoint resource throttling also limits the maximum number of lookup references per list query. By default, the limit is set to eight (8) lookup fields per list.

Resource Throttling

This particular configuration is set on the web application throttling page in Central Administration, under the heading “List View Lookup Threshold”.

SOLUTION

If you don't need all of the lookups at the same time, you can still run the query by explicitly choosing the fields you want to retrieve. However, in our case we needed all of the classification columns.
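One way to restrict the retrieved fields is to limit the ViewFields of the CAML query; here's a minimal sketch using the CSOM assemblies from PowerShell (the site URL, list name and field names are placeholders, not our actual ones):

Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.dll"
Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"

$ctx = New-Object Microsoft.SharePoint.Client.ClientContext("http://intranet")
$list = $ctx.Web.Lists.GetByTitle("News")

# Limit the ViewFields so the query touches fewer lookup columns than the threshold allows
$query = New-Object Microsoft.SharePoint.Client.CamlQuery
$query.ViewXml = "<View><ViewFields><FieldRef Name='Title' /><FieldRef Name='NewsCategory' /></ViewFields></View>"

$items = $list.GetItems($query)
$ctx.Load($items)
$ctx.ExecuteQuery()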

In this case you have two choices:

  • Increase the list lookup threshold limit to more than 8, which is what we did (see the sketch after this list)
  • Establish a large query window: an interval during the day during which you can perform the queries
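For the first option, the threshold can be raised from the SharePoint Management Shell. A minimal sketch, assuming the web application URL and picking 12 as the new limit:

$webApp = Get-SPWebApplication "http://intranet"
$webApp.MaxQueryLookupFields = 12   # the default value is 8
$webApp.Update()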

UpdatePanel Troubles with SharePoint 2013

A few weeks ago I hit a nasty bug on customer premises, where the combination of SharePoint 2013 and the ASP.NET UpdatePanel control resulted in some very weird behaviour.

THE SYMPTOMS

Our project consisted of several full-trust ASPX pages, dutifully placed in the _layouts folder. Two of these pages had ASP.NET AJAX UpdatePanel controls to give "a kind of" smooth user interaction experience. The reason to use it was the simplicity of wrapping existing SharePoint server controls such as grids and buttons in an update panel to prevent full page reloads on postbacks. (Yes, I know it's legacy technology, but sometimes it's still the best and simplest solution for a given problem.)

However, these pages gave random errors, with no discernible pattern, as if the postback event was somehow lost between the UpdatePanel client-side code and the server. The only clue was in the ULS logs, where the failing requests had no user claims associated with them, as if they were non-authenticated calls. What's worse, with no clear cause, our reproduction procedure was to mindlessly click the buttons in the hope of triggering a random error. Needless to say, it was way too inefficient.

Googling around, I found a way to reproduce the errors using the Fiddler HTTP debugging tool: load the page for the first time with Fiddler running in the background, delete all the captured traces in Fiddler, and then click the button inside the UpdatePanel. Immediately, the UpdatePanel throws the dreaded error.

THE CAUSE

According to the Microsoft KB3062825 article, the cause is a mismatch between the HTTP protocol implementation in the UpdatePanel and SharePoint 2013: instead of returning a 401 code in the authentication handshake, SharePoint returns a 200 with the login redirect, and the UpdatePanel loses its changes. It only applies to certain combinations of SharePoint 2013 and the .NET Framework, and even in those combinations it isn't certain that the error will happen.

Unfortunately, we were in that situation, with the right combination of SharePoint and .NET versions.

THE SOLUTION

The solution is simple: applying a specific .NET Framework hotfix (one for Windows Server 2012 R2, another for Windows Server 2012) that patches the HTTP protocol handling causing the error. The hotfix is available via Microsoft support, but it's not publicly available. I think that's because the error isn't deterministic enough to be included in a regular cumulative update or service pack.

You also have to install SharePoint 2013 SP1, together with a Cumulative Update from December 2014 (or later).
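If you're not sure whether your farm meets that baseline, you can check the farm build number from the SharePoint Management Shell (a quick sketch; compare the result against the published SP1 and December 2014 CU build numbers):

(Get-SPFarm).BuildVersion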

Galileo and SharePoint (via Azure) at SPS London 2015

Yesterday (July 11th) I was presenting at the very first SharePoint Saturday London. The SPS event was very well organized by Peter Baddeley and Seb Matthews.

Galileo with PIR sensor sending data to Azure

My session was about connecting IoT to Office 365 (via Azure). I used an Intel Galileo prototyping board with a passive infrared (PIR) sensor. The sensor data was used to determine whether a meeting room was empty or occupied. The Galileo uploads the raw data to an Azure Event Hub. A continuously-running Stream Analytics job then translates the raw data into a 1-minute resolution view of the room availability and inserts this data into Azure Table Storage. Finally, a provider-hosted Office 365 SharePoint application visualizes the room availability.

The slides for my talk are available at SlideShare and the code is now live at GitHub.

SharePoint App Catalog And Missing Apps

Another weird SharePoint app bug happened yesterday. The solution was fairly easy once you knew what was going on, but it's just weird altogether.

SYMPTOMS

You have a custom app in your SharePoint 2013 App Catalog.

A custom app inside the App Catalog, under Apps for SharePoint

You want to add this app to a SharePoint site, but you can't find it in the "From Your Organization" section when you click "Add an app" in the site.

The App Is Missing From “Your Apps”

CAUSE

I first suspected that the current user didn't have permissions to add an app. However, the user was the site collection administrator and thus had permission to install apps.

Yet…a slight detail. The App Catalog site is, well, a SharePoint site. With its own permissions. And, by default, containing only the user who created the catalog in the first place (the system admin).

So, the current user, although a site collection admin, doesn’t have permissions to read from the app catalog. (This is the weird part, as I expected SharePoint to do the reading using a system account behind the scenes.)

SOLUTION

Add the users that should be able to install your custom apps to the site permissions of the App Catalog site, with Read permission level. In my case it was “Dani Alves” (yes, I’m a Barcelona fan).

Adding Read Permissions to the App Catalog
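If you have many users to add, the same grant can be scripted with the server object model. A minimal sketch, assuming a hypothetical catalog URL and account name:

$web = Get-SPWeb "http://intranet/sites/apps"   # the App Catalog site
$user = $web.EnsureUser("CONTOSO\dani.alves")
$assignment = New-Object Microsoft.SharePoint.SPRoleAssignment($user)
$assignment.RoleDefinitionBindings.Add($web.RoleDefinitions["Read"])
$web.RoleAssignments.Add($assignment)
$web.Dispose()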

Now, the app is visible in “Your Apps” when you try to add it to a site. Yeah!

Custom App Is Now Visible

The App Model in Detail (IV): Building a High-Trust App

In the last installment of this series about the app model in detail, I talked about Low-Trust and High-Trust applications in SharePoint 2013. Well, today we're going to roll up our sleeves and build a high-trust app (also known as S2S, server-to-server) from scratch.

Preparing the environment

User Profile service

To start building a high-trust application, we need to configure several things in our local SharePoint 2013 environment. (You remember that high-trust apps are only possible in an on-premises SharePoint, right?) There aren't many settings, but it's easy to forget one of them and then have a hard time tracking down where the error comes from.

First of all, our SharePoint 2013 must have the User Profile service active, with the profiles of the users we're going to use in the application already indexed. This is necessary because the authentication of a high-trust app needs to "find" the user in the SharePoint profile service in order to run queries on their behalf. If the user's profile isn't there, authentication will fail.
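A quick way to verify that a profile is there is the UserProfileManager class, called from the SharePoint Management Shell (a sketch; the site URL and account name are placeholders):

[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.Server")
$site = Get-SPSite "http://intranet"
$context = Get-SPServiceContext $site
$profileManager = New-Object Microsoft.Office.Server.UserProfiles.UserProfileManager($context)
$profileManager.UserExists("CONTOSO\jdoe")   # should return True for every user of the app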


In fact, the access token that the app sends to SharePoint contains the user's identifier, and SharePoint relies on it to know whether the user is valid, looking it up in its profile database. The identifier is usually the Windows user's SID, their UPN or their Active Directory user name. If we use other authentication systems such as FBA or claims, the identifiers will be different. It is strictly necessary that the user's identifier is present in their profile and that there are no duplicates. If you're really curious, there's an excellent post by Steve Peschka about it.

SSL certificate

To be able to sign the app token, we need an SSL certificate. While developing, we can use a self-signed certificate. Later, in production, we'll use a real one.

In addition, for our app to communicate with SharePoint securely, the communication must be encrypted over HTTPS. For that we'll need another SSL certificate with the URL of the app. This isn't necessary in development, where we can relax the restriction and use HTTP, but in production it would be seriously imprudent.

To create a self-signed certificate, go to the IIS console, open the "Server Certificates" section and choose the "Create Self-Signed Certificate" option. We'll name it CertificadoHighTrust.


Then we export the certificate including the private key, setting "password" as the password. We end up with a PFX file containing the digital certificate that we'll use in our app. This file has to be in a folder accessible from Visual Studio. In our case, as we're developing on the SharePoint machine itself, we don't have to move the file anywhere; we'll keep it at C:\Certificates\CertificadoHighTrust.pfx.


We'll also export the certificate without the private key, to get the file CertificadoHighTrust.cer. To do this, go to "Server Certificates" in IIS, open the certificate and, on the "Details" tab, use the "Copy to file" option, indicating that we don't want to export the private key.
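The same certificate can also be created and exported without the IIS console, on Windows Server 2012 or later. A sketch, assuming the C:\Certificates folder exists and using the same name and password as above:

$cert = New-SelfSignedCertificate -DnsName "CertificadoHighTrust" -CertStoreLocation "cert:\LocalMachine\My"
$password = ConvertTo-SecureString -String "password" -Force -AsPlainText
Export-PfxCertificate -Cert $cert -FilePath "C:\Certificates\CertificadoHighTrust.pfx" -Password $password
Export-Certificate -Cert $cert -FilePath "C:\Certificates\CertificadoHighTrust.cer"   # public part only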


Now let's check the permissions SharePoint needs to be able to process our certificates. There are two requirements:

  • The SecurityTokenServiceApplicationPool application pool must have read permissions on the certificates folder
  • The application pool of the web application where we'll install the app (in our case, the one on port 80) must have read permissions on the certificates folder

In our case, those are the SPFarm and SQLSvc accounts. We grant them the corresponding permissions on the Certificates folder.
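Granting the read permission can also be scripted (a sketch; CONTOSO is a placeholder domain):

$acl = Get-Acl "C:\Certificates"
foreach ($account in "CONTOSO\SPFarm", "CONTOSO\SQLSvc") {
    # Read access on the folder, inherited by the certificate files inside it
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule($account, "Read", "ContainerInherit,ObjectInherit", "None", "Allow")
    $acl.AddAccessRule($rule)
}
Set-Acl "C:\Certificates" $acl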


Now we have to make SharePoint recognize our certificate. We open a SharePoint PowerShell console and register the certificate as trusted.

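A minimal sketch of that registration, using the public .cer file exported earlier:

$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\Certificates\CertificadoHighTrust.cer")
New-SPTrustedRootAuthority -Name "CertificadoHighTrust" -Certificate $cert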

Configuring the trusted issuer

Once SharePoint trusts our certificate, we'll configure what is known as a "trusted issuer". This is nothing more than telling SharePoint that tokens signed by a "trusted issuer" can be trusted. And how does SharePoint know that an issuer is trustworthy? First, the issuer ID (a GUID that travels inside the token) has to exist in the SharePoint configuration. Second, the token has to be signed by a certificate that SharePoint "trusts" because it has its public part. As we've already taken care of the certificate part, all that's left is to tell SharePoint the ID of our trusted provider. It can be any GUID; here we'll use aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee (if we use letters in the GUID, they have to be lowercase). Nice and easy to remember, right?

To register our trusted issuer, run the following PowerShell code right after the certificate import script:
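A minimal sketch of that script, assuming the issuer ID above, the $cert variable from the previous step, and a placeholder issuer name:

$realm = Get-SPAuthenticationRealm
$issuerId = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
New-SPTrustedSecurityTokenIssuer -Name "High Trust App Issuer" -Certificate $cert -RegisteredIssuerName "$issuerId@$realm" -IsTrustBroker
iisreset   # so the new issuer is picked up without waiting for the config cache to refresh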


 

Now we could start developing the app, but first we'll allow the use of the self-signed certificate by relaxing the authentication requirements. (Careful: this can only be done in development environments, never in production.)
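A minimal sketch of that relaxation (again, development only):

$serviceConfig = Get-SPSecurityTokenServiceConfig
$serviceConfig.AllowOAuthOverHttp = $true   # never do this in production
$serviceConfig.Update()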


 

Developing the app

The app will need the SSL certificate and the password of its private part. In addition, the account the app runs under (the IIS application pool) must have permission to access the certificate's location.

We open Visual Studio 2013 and create a SharePoint 2013 app. When the wizard appears, we indicate that we want a provider-hosted app and that the app's identity will be established through a certificate.


We now have an application (in my example, created with Web Forms) that shows the title of the current SharePoint site where the app is installed. The solution consists of two projects: the SharePoint app project and the web project containing the app's logic.


The code that makes the call to SharePoint is very simple:


 

 

As you can see, the SharePoint context is established using the TokenHelper auxiliary class and its GetS2SClientContextWithWindowsIdentity method. This call obtains a high-trust app (S2S, server-to-server) context using the identity of the Windows user running the application. This is the default configuration, but it can be modified to use a federated identity, for example.

When we run the application, the permission-granting dialog appears, and after accepting it we can see the title of the SharePoint site, "Home".


Inside TokenHelper

Let's see how our application builds the token. If we look at the GetS2SClientContextWithWindowsIdentity method, we'll see that its body is four lines of code.

First it obtains the realm of the application. Right after that, it obtains a JWT (JSON Web Token) with the claims of the current Windows user. Once we have the JWT, we wrap it into an access token with the GetS2SAccessTokenWithClaims method. Finally, we exchange the access token for a SharePoint client context.
The interesting part is seeing how the token is made. If we look at the GetS2SAccessTokenWithClaims method, we'll see that it ends up in an IssueToken method, which builds the access token.

 

The following MSDN article explains the parts of the access token that our application builds. In essence, it builds an "actor" token that identifies the current user, made from the user claims created earlier. The token is issued for the current application (the "aud" parameter) and signed by our certificate (the SigningCredentials property). This inner token is wrapped in an outer token that is not digitally signed.
As you can see, the mysterious "access token" is nothing more than a text string with JSON-formatted data describing an application identity and a user identity.
The last "mysterious" part is how SharePoint produces the ClientContext object from a token. It's very simple: the app attaches the access token to the header of the API request, and when the call returns, the context is already correctly initialized. Let's see it now with Fiddler.

Testing with Fiddler

If we open Fiddler to watch the HTTP traffic between the application and SharePoint, we'll see that the application makes a call to the CSOM API (/_vti_bin/client.svc/ProcessQuery). If we inspect the request, in the headers we'll see the Authorization header with the value "Bearer" followed by Base64-encoded text. This is our access token.


If we use a tool such as JWT.io to decode the token, we can see its structure.


For more information about the structure of the token, there's a magnificent post by Kirk Evans on the subject.

Conclusion

I hope this post has demystified the world of High-Trust applications a little. As you can see, it allows us to use the app model without having to be in the cloud, which is an important step towards adapting our developments to the hybrid scenarios that look set to become much more common in the future.

Have you worked with this model? Can you share your experiences? The comments are waiting for you right here, below this post!

SharePoint Search Alerts and the Case of Only 10 Results

A puzzling SharePoint search alert behaviour kept our team of three scratching our heads for days. If you speak Spanish, you can check my teammates' blog posts about this same error: Ignasi and Miguel.

SYMPTOMS

You have a SharePoint search configured correctly in SharePoint 2010/2013. You search for a keyword and create a search alert for the results.


You make more than 10 changes that should trigger the alert. However, the search alert email you receive lists only 10 changes. The rest of the changes are not sent as alerts.

CAUSE

The Search Alert mechanism in SharePoint 2010/2013 is described in great detail in the following MSDN article. In a nutshell, the process is as follows:

  1. The alert is invoked once a day ("daily summary") or once a week ("weekly summary").
  2. The alert runs the query again with the user-supplied search criteria (in my example case: "Hello").
  3. If there are no new results (that is, results more recent than the last time the alert was run), the alert finishes.
  4. If there are new results, they are formatted according to the search alert email template and sent to the user.

There is one tiny bit of information missing here. Step 2, where the search query is run again, has a hidden parameter that limits the number of results returned from the search index. If you look inside the SharePoint code that processes the alert, you will find a RowLimit parameter supplied to the query:


This searchAlertNotificationQuota is a property that ultimately comes from the AlertNotificationQuota property of the Search service application object in the SharePoint server object model.

SOLUTION

Run a PowerShell script to update this property on the Search service application, setting the number of returned search results to a value of your convenience. Let’s say 75:
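A minimal sketch, assuming the property is exposed directly on the Search service application object:

$ssa = Get-SPEnterpriseSearchServiceApplication   # assumes a single Search service application
$ssa.AlertNotificationQuota = 75
$ssa.Update()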

Exposing BLOB Data in Child Entities With Business Connectivity Services

Another interesting issue arose last week. I was tasked with implementing a BCS .NET connector to an OTRS web-based issue tracking service, as I mentioned earlier. The icing on the cake was extracting the binary data (issue attachments and images) and showing it in SharePoint, leveraging Business Connectivity Services (BCS).

I found a post on how to use BCS to expose SQL Server data, which was not applicable in my case. I also had the extra difficulty of having the attachments in a child entity: the OTRS Ticket was the primary entity in my model, with the OTRS Article as a child entity and a 1:N relation between them. The attachments were properties of the article in OTRS, but in the model I attached them to the Ticket to make them more accessible.

So, I struggled to build a model that had to meet two goals:

  • expose entity fields with BLOB data (attachments and their binary content)
  • keep the BLOB entities on the child side of the relation

In this post I will show you how to achieve these goals step by step with Visual Studio.

What do we want to achieve?

This is the model that I'd like to end up with. It has a main entity called Product and a set of child entities called Photo. The child entity has a compound key made of both the ProductId and its own PhotoId. It also has a binary field called PhotoData, together with MIMEType and FileName fields that govern how the photo is exposed to the browser.

Those are the minimum three components for binary data BCS compatibility: MIME type, binary content and a file name.


We will model these two entities in a custom .NET assembly connector. For brevity, I will fake the external service and return hardcoded data read from picture files embedded inside the connector.

Building the Connector

The first step is to create a new Business Data Connectivity Model named ProductModel in Visual Studio.


Visual Studio will create a new entity called Entity1 and implement the sample entity methods (ReadItem and ReadList), together with a .NET implementation of the entity. The main problem with BCS development is that the metadata has to match the implementation hand in glove, and the BCS metadata also has to be internally consistent.


We will begin by filling in Entity1, changing its name to Product. We'll also change the name of the model from BdcModel1 to ProductModel. After renaming several files and nodes in the BDC Explorer, we'll have something like this.


As you can see, the Product entity has ReadItem (gets a product by its ID) and ReadList methods (gets a list of products). The methods are declared in the BDC model (left side) and their code will reside in ProductService.cs class.

We will model the Product entity operations first, as every change in the model triggers a change in the generated code in the ProductService class. To begin, we'll change the Identifier1 field of the Product entity into a ProductId of Int32 type.


Modeling Mayhem

Then, in the BDCM editor we'll select the methods and fill in their details in the BDC Method Details pane. This is the tricky part of modeling BDC: it's very easy to get it wrong. Luckily, the BDC Explorer lets us copy and paste metadata to save time. First, we will model the ReadList operation. It takes no parameters and returns a "Return" parameter of type "Collection of Product", which will be our entity metadata. Take a look.


When we edit the metadata, we have the following BDC Explorer tree:


Here we have to change the type and the name of Entity1List, Entity1, Identifier1 and Message. They should be: ProductList, Product, ProductId (Int32) and ProductName (String). The change is done in the Properties window (F4), where we change the Name and Type Name properties. When changing the Type Name, you have to choose the entities from the "Current Project" tab. For collections (such as ProductList) you should select the entity and check the "Is Enumerable" box.


Note: when modeling the ProductId, you also have to specify that the property maps to the Identifier (and the entity it refers to, i.e. Product).


Now we have to do the same thing (without the collection, of course) for the ReadItem method. It should take one parameter (mapped to the ProductId identifier) and return a Product. The good news is that we can copy and paste the Product node from the ReadList method into the ReadItem method.


The underlying code class Product.cs and the service ProductService.cs have to be changed to include "hardcoded" data:

At this moment we have a workable connector that exposes products and product details, but nothing else. We will do a quick check by deploying the connector and creating a new external list in SharePoint.


Well done! Now we have to model the photos 🙂

Adding the Photo Entity to the model

First, we’ll add a new class to the project, with the following simple properties:

Then we add a new entity in the BDCM model canvas by right-clicking it and choosing "Add / Entity":


Of course, we have to change the name and add the properties of the entity. We have to add both identifiers, PhotoId and ProductId. Both have to refer to the Photo entity, and in the association we will let BDC know that it provides the value of ProductId when the association is navigated.

I have also added a ReadItem method.


Even though the association is what we're after, you still have to model the ReadItem method first and add an instance of that method, of SpecificFinder type. It takes two In parameters, with the two identifiers of the Photo entity, and returns an instance of the Photo class, with all its fields.


We’ll add the association between Product and Photo entities now, right-clicking again on the BDCM canvas:


In the dialog, we'll make sure that the association is correct; in this case we only want the navigation from Product to Photo, not the other way around. We'll remove the extra navigation method (the last one) and uncheck the Foreign Key association, as the ProductIds are returned by the code of the association method in the Product class.


Now we have a new method called ProductToPhoto in the Product entity that returns a list of photos for that product.


We still have to do the "boring" stuff of mapping the return types in the BDC Explorer pane:


After that, we have to write the code for the ProductToPhoto method. At the moment we won’t be showing the photo yet, so we can set the BLOB array to null.

Ready to roll! Deploy the solution to SharePoint and create External Content Type Profile pages in the BDC Service Application (Central Administration). It will automatically add the related Photos to the Product in its profile page.

We have to delete and recreate the external list. Now we can go to the View Profile action and see the details of the product and its photos:


Reading the photos

The only thing missing is the link to see the actual photo (the BLOB content). We have to add a StreamAccessor method and a method instance.

We can’t add this method in the entity designer. We have to open the BDCM file as an XML file and then add the Method and MethodInstance nodes to it.


We will add our method under the existing ReadItem method:


The XML snippet to insert is this one:

As you can see, we return a Stream with the data. We have two additional instance properties that specify which entity property is the MIME type and which one is the file name.

Check the mappings: both identifiers should be mapped to the Photo entity, both as parameters and as return values (for the ReadItem method). If not, it will complain at runtime about "Expected 2 identifiers and found only 1". It took me some time to solve that one!

In our PhotoService.cs class we have to add the method that returns a Stream with the data. In my case I use a Base64 string with a small sailboat image in PNG format, created with the excellent web site http://www.base64-image.de/step-1.php that encodes an image into a string. I use the .NET Convert class to turn that string back into the original array of bytes. (In this snippet I have shortened the string for legibility):
 

Deploy again to SharePoint, rebuild the external content type profile pages and it’s done!
 
 
The complete code for this example is available on my SkyDrive.

How to Enable Custom JavaScript on MDS Pages in SharePoint 2013

If you have a custom JavaScript file loaded in your master page, as we usually do in SharePoint, you might have stumbled upon problems between custom JS and the new SharePoint 2013 Minimum Download Strategy (MDS).

By default, MDS is enabled on Team Sites in SharePoint 2013 and allows refreshing the data on SharePoint pages without full page reloads. But in order to do so, all the content and JavaScript on MDS pages must play along nicely. If not, the symptoms include:

  • blank page on loading custom JS in an MDS-enabled page
  • custom JS script not loading

The solution

The first part is to use the ScriptLink control in the master page, instead of using script tags directly. Specify the "LoadAfterUI" attribute so that the script is loaded after the page is loaded in MDS.

<SharePoint:ScriptLink language="javascript" ID="Whatever" name="~sitecollection/Style Library/js/yourcustom.js" OnDemand="false" LoadAfterUI="true" runat="server" Localizable="false" />

The second part is to encapsulate all your custom JS in a single function and call it from your custom code. Your yourcustom.js file should look like this:

function $_global_customjs() {
    // Register the entry point that MDS will call on every (partial) page load
    _spBodyOnLoadFunctionNames.push('DoSomething');
}

var DoSomething = function () {
    // Your custom JS here
};

$_global_customjs();

BDC Visual Studio Project and Missing Assembly Trouble

I had a strange error the other day, deploying a Business Connectivity Services (BCS) model arranged around a .NET assembly. When accessing the external list data, I got the following error:

Assembly was requested for LobSystem with Name 'Namespace.LobSystem', but this assembly was not returned. SystemUtility of Type 'Microsoft.SharePoint.BusinessData.SystemSpecific.DotNetAssembly.DotNetAssemblySystemUtility' requires the assembly to be uploaded.

Of course, I checked the assembly and it was loaded in the GAC. So, where’s the error coming from?

Well, our friend BCS registers the assemblies for your external content type when you activate the feature containing your BCS model and assembly. This feature is created automatically when you create a new BCS project in Visual Studio. The feature has a custom feature receiver and also a custom entry in the feature.xml declaration.

<Properties>
  <Property Key="GloballyAvailable" Value="true" />
  <Property Key="IncrementalUpdate" Value="false" />
  <Property Key="ModelFileName" Value="YourModel\YourModel.bdcm" />
  <Property Key="BdcModel1" Value="BdcAssemblies\YourAssembly.dll" />
</Properties>

The key of that last property has to match the name of the LobSystem in the BDCM file (the entity model):

<LobSystem Name="LobSystemName" Type="DotNetAssembly">
  <LobSystemInstances>
    <LobSystemInstance Name="LobSystemInstance" />
  </LobSystemInstances>

My error was renaming the model at some point. The rename went fine in the model XML, but the old name ("BdcModel1") still remained in the feature.xml. After manually editing the feature.xml and pointing it to the new name of the LobSystem, the error was gone:

<Properties>
  <Property Key="GloballyAvailable" Value="true" />
  <Property Key="IncrementalUpdate" Value="false" />
  <Property Key="ModelFileName" Value="YourModel\YourModel.bdcm" />
  <Property Key="LobSystemName" Value="BdcAssemblies\YourAssembly.dll" />
</Properties>

Configuring Content Organizer Rules with PowerShell

As you probably know from my previous posts, I have been configuring a wide-scale document management solution using the Content Organizer feature of SharePoint. The idea is to use managed metadata to tag each document with information about the business unit and region it originates from (in my case, Region and Section metadata columns) and let SharePoint classify it into the correct site and document library. I wrote about how to expose cross-site content organizer hubs a few months ago.

As I had many regions and many sections to configure, I had to build the whole hierarchy with a PowerShell script instead of doing the work by hand. In this post I will share the things I learnt by doing so.

Anatomy of the Content Organizer Rules

When you click Content Organizer Rules in Site Settings, SharePoint shows you the contents of a hidden list called "Content Organizer Rules". You can see it by adding /RoutingRules to the site URL.


Each rule is a list item with several fields of importance:

  • RoutingEnabled: this column should be set to 0 or 1 in order to disable or enable the rule.
  • RoutingPriority: a number from 1 to 10. 1 is the highest priority and 10 is the lowest one. A rule with higher priority will run BEFORE any rules with lower priority.
  • RoutingRuleName: a string with the rule name.
  • RoutingContentType: if the rule applies to a specific content type, here you should put the content type name (not the ID).
  • RoutingContentTypeInternal: the content type ID of the content type specified in the RoutingContentType field, concatenated with the content type name using the pipe ‘|’ character as separator.
  • RoutingRuleExternal: if the content organizer rule should route to another site, this field should be set to 1. Set it to 0 if the routing is done in the same site as the rule.
  • RoutingTargetLibrary: the destination library if the document is routed in the same site.
  • RoutingTargetFolder: the destination folder if the folder classification is used.
  • RoutingTargetPath: if you are routing to another site, this should be the name of the content organizer source (set in Central Administration).
  • RoutingConditions: this is an XML string with the routing condition. The XML syntax will be explained later.

In order to create these rules programmatically, the SharePoint server object model exposes a class named EcmDocumentRouterRule. This class simply surfaces the underlying list columns as class properties.

In order to create a new rule, just instantiate the EcmDocumentRouterRule passing the SharePoint site for the rule in the constructor (the SPWeb object). Populate the properties and call the Update() method on the rule. Job done!

Rule Condition XML

The most complex part of creating the rule in code is constructing the condition XML correctly. It should be in this form:

<Conditions>
  <Condition Column='column' Operator='operator' Value='value'>
  </Condition>
</Conditions>

All the child nodes of Conditions are evaluated together (i.e. it's an AND of all of them). Each condition specifies the Column that is evaluated, the operator and the value to compare against.

The column is comprised of several pieces: field GUID|field internal name|field display name (in my case it was ‘5bc078e1-bcf6-4475-aadf-2b567726c696|Region|Region’).

The available operators are:

  • IsEqual
  • IsNotEqual
  • GreaterThan
  • LessThan
  • GreaterThanOrEqual
  • LessThanOrEqual
  • BeginsWith
  • NotBeginsWith
  • EndsWith
  • NotEndsWith
  • Contains
  • NotContains
  • EqualsOrIsAChildOf
  • NotEqualsOrIsAChildOf
  • IsEmpty
  • IsNotEmpty
  • ContainsAny
  • ContainsAnyOrChildOf
  • ContainsAllOrChildOf

The value is the literal value to compare against. For managed metadata columns, the value is specified as ‘16;#Austria|1953bfe9-95d0-4ec8-8b9f-7a58169a9a53’. The left part is the underlying lookup value for the managed metadata field and the right part is the term ID of the selected term.

Note: all the managed metadata columns in SharePoint are implemented as lookup columns to a site-collection root site hidden list. Every time a new managed metadata value is added to a list item, SharePoint adds a new entry to this hidden list. The consequence of this implementation is that you must get the lookup value IDs to make a CAML query against managed metadata using TaxonomyField.GetWssIdsOfTerm method.
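For reference, a sketch of that call, reusing the $web, $tstore, $tset and $term variables defined in the script below:

# Resolve the hidden-list lookup IDs (WssIds) of a term and its descendants
$wssIds = [Microsoft.SharePoint.Taxonomy.TaxonomyField]::GetWssIdsOfTerm($web.Site, $tstore.Id, $tset.Id, $term.Id, $true, 500)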

Putting it all together

So, let's say we have the text value of the managed metadata column and we want to create a content organizer rule that routes documents with the content type "My Content Type" to a different site when the value of that column matches our text value or its children (in my case the column name and the term set name are Region, and the value is ‘Spain’). How should we construct our PowerShell script to do it?

First of all, we have to retrieve the term ID that corresponds to the taxonomy node of the "Region" term set with the value "Spain". The $web variable holds the SPWeb object of the site we want to create the rule for.

$regionValue = 'Spain'
$ts = Get-SPTaxonomySession -Site $web.Site
$tstore = $ts.TermStores[0]
$tgroup = $tstore.Groups["Group Name"]
$tset = $tgroup.TermSets["Region"]
$terms = $tset.GetTerms($regionValue, $true)   # GetTerms returns a collection of matching terms
$term = $terms[0]
$termValueGuid = $term.Id

Now we have the term ID that corresponds to the ‘Spain’ entry in the ‘Region’ term set of the ‘Group Name’ term set group. Next we have to construct the full literal value of the managed metadata column (the lookup part and the GUID part). To do so, we use the TaxonomyFieldValue.PopulateFromLabelGuidPair method. The value consists of the text (‘Spain’) and its GUID, separated by the pipe character (‘|’).

$docLib = $web.Lists["Document Library Name"]
$regionField = [Microsoft.SharePoint.Taxonomy.TaxonomyField]$docLib.Fields["Region"]
[Microsoft.SharePoint.Taxonomy.TaxonomyFieldValue]$taxonomyFieldValue = New-Object Microsoft.SharePoint.Taxonomy.TaxonomyFieldValue($regionField)
$taxonomyFieldValue.PopulateFromLabelGuidPair([Microsoft.SharePoint.Taxonomy.TermSet]::NormalizeName($regionValue) + "|" + $termValueGuid)

Now we can create the content organizer rule. Remember that the column name for the condition XML is the field GUID, internal name and display name. In this case we route the document to another content organizer source called ‘Spain’; that's why the rule is marked as external. The value for the taxonomy field is retrieved using the ValidatedString property of the TaxonomyFieldValue object.

[Microsoft.Office.RecordsManagement.RecordsRepository.EcmDocumentRouterRule]$rule = New-Object Microsoft.Office.RecordsManagement.RecordsRepository.EcmDocumentRouterRule($web)
$rule.ConditionsString = "<Conditions><Condition Column='" + $regionField.Id + "|Region|Region' Operator='EqualsOrIsAChildOf' Value='" + $taxonomyFieldValue.ValidatedString + "'></Condition></Conditions>"
$rule.Name = $regionValue + " rule"
$rule.ContentTypeString = $web.AvailableContentTypes["My Content Type"].Name   # the rule stores the content type name
$rule.RouteToExternalLocation = $true
$rule.Priority = "5"
$rule.TargetPath = $regionValue
$rule.Enabled = $true
$rule.Update()

 

Summary

I hope this code snippet saves you some time if you have to create a lot of content organizer rules and want to avoid doing it by hand.