LightSwitch Custom Validation Refresh

I ran into an odd problem this week in a SharePoint-enabled LightSwitch HTML Client application. I am sharing the solution here in case someone out there faces a similar situation.

SYMPTOMS

You have a LightSwitch HTML application (a banner definition screen, in my case), and you use a custom validation function inside the beforeApplyChanges function.

myapp.AddEditBanners.beforeApplyChanges = function (screen) {
    if (screen.Banner.EndDate && screen.Banner.EndDate <= screen.Banner.StartDate) {
        screen.findContentItem("EndDate").validationResults = [
            new msls.ValidationResult(screen.Banner.EndDate, "La fecha de fin tiene que ser posterior a la fecha de inicio.")
        ];
        return false;
    }
};

Your validation works! However, when you try to correct the field that is being validated, the Save button doesn’t work anymore and the custom validation message stays on the screen (it says, in Spanish, that the end date has to be later than the start date).


Furthermore, you get an error message that says “Cannot continue. There are validation errors on the page”.


CAUSE

Apparently, the LightSwitch code generator attaches the validation refresh to control events such as focus, blur and keyup, but only for the default LightSwitch validations (data type, length, required fields and the like).

Custom validations in beforeApplyChanges are not refreshed when the data in the controls changes, and because the page is in an invalid state, the Save button doesn’t trigger the beforeApplyChanges function again. It’s a subtle bug in the LightSwitch code generation that causes a deadlock: we can’t save because the page has validation errors, and the errors can’t be cleared because the custom validation only runs when we save.

SOLUTION

Well, the best solution would be for LightSwitch to expose an additional hook point for custom validation, one that would be refreshed automatically on any control change. However, it is unlikely that this will ever be added to the LightSwitch codebase.

The available solution for now is to register a change listener for those properties on our screen that trigger custom validation. Once the value changes, we remove the validation errors for that field, if there are any. This ensures that the Save button works again, triggering the custom validation logic once more. We can hook up the change listener in the screen created event.

myapp.AddEditBanners.created = function (screen) {
    function onBannerNameChange() {
        if (screen.findContentItem("BannerName").hasValidationErrors()) {
            screen.findContentItem("BannerName").validationResults = [];
        }
    }

    screen.Banner.addChangeListener("BannerName", onBannerNameChange);
};

However, we have to be careful to unregister the change listener when the screen is closed. This technique is explained in this MSDN Forum post by Huy Nguyen.

// This snippet goes inside the created handler shown above
screen.details.rootContentItem.handleViewDispose(function () {
    screen.Banner.removeChangeListener("BannerName", onBannerNameChange);
});
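
Putting the two snippets together, the complete created handler looks like this:

myapp.AddEditBanners.created = function (screen) {
    function onBannerNameChange() {
        // Clear our custom validation error so the Save button becomes usable again
        if (screen.findContentItem("BannerName").hasValidationErrors()) {
            screen.findContentItem("BannerName").validationResults = [];
        }
    }

    // Re-check the field whenever its value changes
    screen.Banner.addChangeListener("BannerName", onBannerNameChange);

    // Unregister the listener when the screen is disposed
    screen.details.rootContentItem.handleViewDispose(function () {
        screen.Banner.removeChangeListener("BannerName", onBannerNameChange);
    });
};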

What I Learned at SharePoint Saturday Stockholm

As I mentioned in my previous post, last weekend was well spent with the SharePoint Saturday crowd in Stockholm, Sweden. In this post I want to highlight what I learned from it. It was very difficult to choose which sessions to attend, and I had my own session to worry about as well.

What I learned at SharePoint Saturday Stockholm 2015

JavaScript

There were two very good sessions about JavaScript in SharePoint.

The first one was called “SharePoint JavaScript, Doing It Right” and was presented by Hugh Wood, a very knowledgeable guy from Rencore (by the way, they are the makers of SPCAF, pay them a visit). He summarized his detailed posts on the inner workings of JavaScript in SharePoint. If you haven’t read them before, you don’t know what you are missing.

The other session was called “Mixing power of CSR with flexibility of KnockoutJs: three examples” and was presented by Andrei Markeev. I couldn’t attend it, but I spoke with Andrei and I am really looking forward to seeing the slides and the sample code. Andrei’s GitHub repository has some very good things to have in your SharePoint arsenal: a CSR live tool (Cisar) and a CAML JS console.

Workflows

The second thing I learned in Stockholm was just how awesome the new workflow development experience with Visual Studio is. SharePoint 2013 and Office 365 workflows use the new, external engine, and all the calls are client-side code, either JavaScript in SharePoint or REST API calls to your servers.

The session was called “Advanced SharePoint Workflow Scenarios” and was flawlessly presented by Paolo Pialorsi from PiaSys. If you haven’t heard of Paolo, which I very much doubt, he is the guy who wrote The Book on SharePoint development and one of the brightest SharePoint people I know. Check out his slides and source code; there is a trove of information there. Grazie, Paolo!

Full-Trust Code to SharePoint Apps Transformation

Another session I liked was called “Transforming SharePoint Farm Solutions to the App Model” and was presented by another Rencore genius and one of the founders of SharePoint Saturday Stockholm, Matthias Einig. He outlined the SharePoint development roadmap and the main obstacles to moving to the cloud. I think it was a sober assessment of the current state of SharePoint development.

 

Were you at SharePoint Saturday Stockholm 2015? What did YOU learn?

SharePoint Code Maintainance and Unit Testing Talk at SPS Stockholm

Last weekend I attended another SharePoint Saturday, this time in Stockholm. The weather was nicer than last year, and the attendance was over the top: 300 attendees came to learn about and discuss SharePoint topics with the expert speakers.

The speaker lineup for SharePoint Saturday Stockholm

My talk was about how to build SharePoint code that’s both maintainable and testing-friendly. I approached the talk from coarse to fine detail, from the distribution of the solution components in SharePoint down to dependency injection mechanisms and the principles of SOLID and GRASP object-oriented design.

Me speaking about maintainable and testable SharePoint code

The attendees at SharePoint Saturday Stockholm

The demo I shared with the attendees was a very simple SharePoint provider-hosted app in ASP.NET MVC that displays the user’s full name and login username. I started with tightly coupled code that sat entirely in the Index action of the controller, and I ended with a testable design that had the SharePoint ClientContext dynamically injected into the service at runtime, dutifully abstracted behind an interface. I used the Unity IoC container and Moq for mocking the dependencies in the tests.
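
To give an idea of the final shape of the demo, here is a minimal sketch of that design (the type and method names are hypothetical, not the actual demo code):

using System.Web.Mvc;
using Microsoft.SharePoint.Client;

// The abstraction that hides SharePoint behind an interface
public interface IUserProfileService
{
    string GetUserFullName();
}

// SharePoint-backed implementation; the ClientContext is injected at runtime
public class SharePointUserProfileService : IUserProfileService
{
    private readonly ClientContext _context;

    public SharePointUserProfileService(ClientContext context)
    {
        _context = context;
    }

    public string GetUserFullName()
    {
        var user = _context.Web.CurrentUser;
        _context.Load(user, u => u.Title);
        _context.ExecuteQuery();
        return user.Title;
    }
}

// The controller depends only on the abstraction
public class HomeController : Controller
{
    private readonly IUserProfileService _profiles;

    public HomeController(IUserProfileService profiles)
    {
        _profiles = profiles;
    }

    public ActionResult Index()
    {
        ViewBag.FullName = _profiles.GetUserFullName();
        return View();
    }
}

// In a unit test, Moq can stand in for SharePoint entirely:
//   var mock = new Mock<IUserProfileService>();
//   mock.Setup(s => s.GetUserFullName()).Returns("Test User");
//   var controller = new HomeController(mock.Object);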

You can find the slides on my SlideShare page, as usual, and the source code for the demo (in the form of a Mercurial repository) on my OneDrive. I have received some very good comments about the topic and the techniques mentioned in the session.

If you attended my session at SPS Stockholm, I have questions for you.

  • What did you like most?
  • What didn’t you like at all?
  • What could be done better?

Let me know how I can improve the talk for future events.

What to Expect from SharePoint 2016

Well, it has been doubted lately, but Microsoft has finally spilled the beans on the next on-premises version of SharePoint. Office General Manager Julia White posted the news under the title “Evolution of SharePoint” on the Office Blogs a few days ago, and it has stirred the SharePoint blogosphere quite a bit.

I won’t repeat Julia’s post, but I would like to comment a little on the implications of SharePoint 2016 as it is described there. In one of my prior blog posts I speculated about the future of SharePoint, and I can see that I wasn’t too misguided.

The SharePoint Continuum

One of the most striking things in Julia’s post is the diagram that illustrates the new SharePoint landscape. It has “blue” on-premises pieces and “orange” cloud pieces, connected by an arrow that says “Hybrid”. I have named it “The SharePoint Continuum” (but without the Q, for those Trek-leaning among you).


For me, it’s one of the most important clarifications in Microsoft’s messaging to SharePoint customers in the last couple of years. It shows that Office 365 is not there to replace your on-premises capabilities, but to complement them. It took me some time to get this message clear, and I can imagine how out-of-the-blue statements and unclear announcements made it easy for IT business decision makers to understand (sarcasm intended).

So, now that we know the next SharePoint Server will exist and will connect, let’s make an educated guess about what it will bring us. (Later, when I re-read this post in the light of the Ignite announcements, I will check how clear my crystal ball was.)

What should IT Pros expect?

I am absolutely sure that IT pros will have A LOT to learn about how to seamlessly integrate SharePoint Server 2016 with the experiences offered by Office 365. Now that Azure Active Directory (AAD) is emerging as “the” authority for authentication and authorization in hybrid scenarios, the roadmap and guidance for enabling hybrid SharePoint experiences will become more straightforward and prescriptive. These hybrid experiences will be the lifeline that connects your on-premises environment, with all the workloads that are kept “on the ground”, to the exclusive experiences only available on your Office 365 tenant. Hopefully, the end user won’t see the seams in between.

Another thing SharePoint 2016 will vastly improve is administration, monitoring and diagnostics. This is a logical assumption to make, as Microsoft has undoubtedly accumulated vast know-how about running the SharePoint core engine as smoothly as possible. I can only imagine that SharePoint 2016 will surface this in the form of helpful administration pages, automatic recommendations (à la Health Checks, but tapping into Microsoft’s “recipe” base) and, hopefully, streamlined SharePoint log management.

I’d also expect some new and streamlined farm topologies, also drawing from the experience of running Office 365 continuously. I am sure that SharePoint core services and processes have been trimmed of any excess resource consumption, and we should be able to benefit from that.

What should end users expect?

The end users will hopefully have a very good time with SharePoint 2016. Let me elaborate.

If you don’t connect SharePoint 2016 to the cloud and keep running strictly on-premises, I expect most of the obnoxious SharePoint annoyances to be gone. I expect a more direct and clean UI, with prominent navigation (à la the Office 365 “jumbo” bar) and deemphasized secondary options that light up as you hover. That would declutter the UI a little and reduce user distraction. I don’t see revolutionary new features coming, but rather a subset of what has already been made available in Office 365. Hopefully, some of the new Cards UI or NextGen portals interactions will trickle down to SharePoint 2016.

Once your SharePoint 2016 connects into a hybrid on-premises/cloud farm, I expect that end users won’t be aware of the change. They will consume “the experiences”, irrespective of where they come from. If SharePoint experts help design seamless navigation and information architecture, I can hardly wait to see hybrid environments where SharePoint does what it does best: run behind the scenes and out of the spotlight.

The Groups experience is the new overarching “concept” that includes a site, an email group, a social group, common files and so on. I expect the UI to further deemphasize “sites” and “lists” terminology in favour of “groups” and “apps”.

As for social, the message is clear: Yammer is the only official way forward. The on-premises social capabilities will stay at the SharePoint 2013 level (as per Jeff Teper’s post at SPC14).

As for search capabilities, I expect hybrid on-premises/cloud search to work without any jarring discrepancies. New display templates inspired by the Delve UI would only be icing on the cake.

I really hope the InfoPath replacement codenamed FoSL (Forms on SharePoint Lists) will be there, but there has been no news about it since it was previewed at SPC14. I wouldn’t bet on it, but maybe it will make its way into the SharePoint 2016 code.

However, I can also see a few SharePoint out-of-the-box capabilities disappearing. For instance, the Business Intelligence capabilities will most probably be replaced by the new free Power BI tier and somehow linked to the powerful cloud BI. PerformancePoint Services? Dead end.

What should developers expect?

I have to admit that my developer community will be the one facing the most changes, but they will be good ones. As much as the shift from full-trust code to the cloud app model (or FTC to CAM, to keep Microsoft’s three-letter-acronym, TLA, strategy in place) is painful, it is also exciting. We will be forced to get out of the cozy SharePoint development box we have been living in since the beginning.

I don’t drink the app Kool-Aid, and I certainly see the painful shortcomings of the new model. But I have to admit that the Office 365 PnP guidance is steadily evolving and the gaping API holes in CSOM are being patched. We will have to keep unearthing them and making enough noise so they keep getting fixed.

The app model is not going the way of sandboxed code or auto-hosted apps; it will be the backbone of the unified SharePoint development model. I expect the manual steps in the development workflow to become streamlined (let’s be honest, typing AppRegNew.aspx manually is a major inconvenience). AAD authentication and authorization with OAuth will take the place of the ACS middleware, and I expect high-trust apps to get a few refinements, too.

The Office 365 API “abstracts” SharePoint, Exchange and OneDrive into Files, Contacts, Calendar and other business-like entities. I expect this “development continuum” to keep evolving:

  • Azure Active Directory (AAD) as the one-stop authentication and authorisation gate
  • Office 365 API as a task-oriented, high-level API for our apps
  • SharePoint CSOM/REST API as the implementation-detail, low-level API for our apps
  • And, of course, the SharePoint full-trust server object model for those things we still can’t run in the cloud

As I said, developers will have exciting times with the new SharePoint 2016, too. And the best thing is that most of the playing pieces are already available on Office 365 and SharePoint 2013, so we can future-proof our developments now.

Conclusion

In the official announcement of SharePoint 2016 a few days ago, I find confirmation (of what I thought would happen) and clarification (of the message to the SharePoint community). I can’t wait to see what new scenarios we will be building on these new foundations.

The sky cloud is the limit!

What do YOU think? Leave a comment to join the discussion!

The App Model in Detail (IV): Building a High-Trust App

In the last installment of this series on the app model in detail, I talked about Low-Trust and High-Trust applications in SharePoint 2013. Well, today we are going to get our hands dirty and build a high-trust app (also known as S2S, server-to-server) from scratch.

Preparing the environment

User Profile service

To start building a high-trust application, we need to configure several things in our local SharePoint 2013 environment. (You do remember that high-trust apps are only possible on an on-premises SharePoint, right?). There aren’t many settings, but it is easy to forget one of them and then have trouble tracking down where the error comes from.

First of all, our SharePoint 2013 needs the User Profile service up and running, and it must have indexed the profiles of the users we are going to use in the application. This is necessary because the authentication service of a high-trust app needs to “find” the user in the SharePoint profile service in order to execute queries on his or her behalf. If the user’s profile is not there, authentication will fail.


In fact, the access token the app sends to SharePoint contains the user’s identifier, and SharePoint relies on it to decide whether the user is valid, looking it up in its profile database. The identifier is usually the Windows user’s SID, UPN or Active Directory user name. If we use other authentication systems such as FBA or claims, the identifiers will be different. It is strictly necessary that the user’s identifier is present in his or her profile and that there are no duplicates. If you are really curious, there is an excellent post by Steve Peschka on the subject.

SSL certificate

To be able to sign the app token, we need an SSL certificate. While developing, we can use a self-signed development certificate. Later, in production, we will use a real one.

In addition, for our app to communicate with SharePoint securely, the communication has to be encrypted under HTTPS. For that, we will need another SSL certificate with the app URL. This is not necessary in development, where we can relax the restriction and use HTTP, but in production it would be a serious imprudence.

To create a self-signed certificate, open the IIS console, go to the “Server Certificates” section and choose the “Create Self-Signed Certificate” option. We will name it CertificadoHighTrust.


At the end, we export the certificate, including the private key, using “password” as the password. We end up with a PFX file containing the digital certificate that our app will use. This file has to be in a folder accessible from Visual Studio. In our case, since we are developing on a SharePoint machine, we don’t have to move the file anywhere, and we will keep it at C:\Certificates\CertificadoHighTrust.pfx.


We will also export the certificate without the private key, to get the CertificadoHighTrust.cer file. To do so, go to “Server Certificates” in IIS, open the certificate and, on the “Details” tab, use the “Copy to file” option, indicating that we don’t want the private key.
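
As an alternative to clicking through the IIS console, both exports can be scripted (a sketch, assuming the certificate lives in the local machine store under the friendly name used above; the Export-* cmdlets need the PKI module available on Windows Server 2012 and later):

# Find the self-signed certificate by its friendly name
$cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.FriendlyName -eq "CertificadoHighTrust" }

# Export the PFX (with the private key), protected by a password
$password = ConvertTo-SecureString "password" -AsPlainText -Force
Export-PfxCertificate -Cert $cert -FilePath "C:\Certificates\CertificadoHighTrust.pfx" -Password $password

# Export the public part only (CER)
Export-Certificate -Cert $cert -FilePath "C:\Certificates\CertificadoHighTrust.cer"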


Now let’s check the permissions SharePoint needs in order to process our certificates. There are two requirements:

  • The SecurityTokenServiceApplicationPool application pool has to have read permissions on the certificates folder
  • The application pool of the web application where we will install the app (in our case, the one on port 80) has to have read permissions on the certificates folder

In our case, those are the SPFarm and SQLSvc accounts. We grant them the corresponding permissions on the Certificates folder.


Now we have to make SharePoint recognize our certificate. We open a SharePoint PowerShell console and register the certificate as a trusted one.

$publicCertPath = "C:\Certificates\CertificadoHighTrust.cer"
$certificate = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($publicCertPath)
New-SPTrustedRootAuthority -Name "CertificadoHighTrust" -Certificate $certificate
 

Configuring the trusted issuer

Once SharePoint trusts our certificate, we can configure what is known as a “trusted issuer”. This is nothing more than telling SharePoint that tokens signed by a “trusted issuer” can be trusted. And how does SharePoint know that an issuer is trusted? First, the issuer ID (a GUID that travels inside the token) has to exist in the SharePoint configuration. Second, the token has to be signed by a certificate that SharePoint “trusts” because it has its public part. Since we have already done the certificate part, all that remains is to tell SharePoint the ID of our trusted issuer. It can be any GUID; here we will use aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee (if we use letters in the GUID, they have to be lowercase). Nice and easy to remember, right?

To register our trusted issuer, we run the following PowerShell code right after the certificate import script:

# CONFIGURE THE TRUSTED ISSUER
$realm = Get-SPAuthenticationRealm
$specificIssuerId = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
$fullIssuerIdentifier = $specificIssuerId + '@' + $realm
New-SPTrustedSecurityTokenIssuer -Name "CertificadoHighTrust" -Certificate $certificate -RegisteredIssuerName $fullIssuerIdentifier -IsTrustBroker
iisreset

We could now proceed to develop the app, but first we will allow the use of the self-signed certificate by relaxing the authentication restrictions. (Careful: this may only be done in development environments, never in production.)

# ALLOW CALLS WITH THE SELF-SIGNED CERTIFICATE
$serviceConfig = Get-SPSecurityTokenServiceConfig
$serviceConfig.AllowOAuthOverHttp = $true
$serviceConfig.Update()

Developing the app

The app will need the SSL certificate and the password of its private part. In addition, the account the app runs under (the IIS application pool) has to have permission to access the certificate location.

We open Visual Studio 2013 and create a SharePoint 2013 app. When the wizard appears, we indicate that we want a provider-hosted app and that the app identity will be established with a certificate.


We now have an application (in my example, created with Web Forms) that displays the name of the current SharePoint site where the app is installed. The solution consists of two projects: the SharePoint app project and the web project with the app logic.


The code that makes the call to SharePoint is very simple:

protected void Page_Load(object sender, EventArgs e)
{
    // The following code gets the client context and Title property by using TokenHelper.
    // To access other properties, you may need to request permissions on the host web.
    Uri hostWeb = new Uri(Request.QueryString["SPHostUrl"]);

    using (var clientContext = TokenHelper.GetS2SClientContextWithWindowsIdentity(hostWeb, Request.LogonUserIdentity))
    {
        clientContext.Load(clientContext.Web, web => web.Title);
        clientContext.ExecuteQuery();
        Response.Write(clientContext.Web.Title);
    }
}

As you can see, the SharePoint context is established using the TokenHelper auxiliary class and its GetS2SClientContextWithWindowsIdentity method. This call obtains a high-trust app (S2S, server-to-server) context using the identity of the Windows user running the application. This is the default configuration, but it can be changed to use a federated identity, for example.

When we run the application, the dialog for granting permissions to the app appears and, after accepting it, we can see the title of the SharePoint site, “Home”.


Inside TokenHelper

Let’s see how our application builds the token. If we look at the GetS2SClientContextWithWindowsIdentity method, we will see that its body has four lines of code.

public static ClientContext GetS2SClientContextWithWindowsIdentity(
    Uri targetApplicationUri,
    WindowsIdentity identity)
{
    string realm = string.IsNullOrEmpty(Realm) ? GetRealmFromTargetUrl(targetApplicationUri) : Realm;

    JsonWebTokenClaim[] claims = identity != null ? GetClaimsWithWindowsIdentity(identity) : null;

    string accessToken = GetS2SAccessTokenWithClaims(targetApplicationUri.Authority, realm, claims);

    return GetClientContextWithAccessToken(targetApplicationUri.ToString(), accessToken);
}
 
First, the realm (domain) of the application is obtained. Right after that, a JWT (JSON Web Token) containing the claims of the current Windows user is created. Once we have the JWT, we package it into an access token with the GetS2SAccessTokenWithClaims method. Finally, we exchange the access token for a SharePoint client context.

The interesting part is seeing how the token is made. If we follow the GetS2SAccessTokenWithClaims method, we will see that it ends up in a method called IssueToken, which builds the access token.

private static string IssueToken(
    string sourceApplication,
    string issuerApplication,
    string sourceRealm,
    string targetApplication,
    string targetRealm,
    string targetApplicationHostName,
    bool trustedForDelegation,
    IEnumerable<JsonWebTokenClaim> claims,
    bool appOnly = false)
{
    if (null == SigningCredentials)
    {
        throw new InvalidOperationException("SigningCredentials was not initialized");
    }

    #region Actor token

    string issuer = string.IsNullOrEmpty(sourceRealm) ? issuerApplication : string.Format("{0}@{1}", issuerApplication, sourceRealm);
    string nameid = string.IsNullOrEmpty(sourceRealm) ? sourceApplication : string.Format("{0}@{1}", sourceApplication, sourceRealm);
    string audience = string.Format("{0}/{1}@{2}", targetApplication, targetApplicationHostName, targetRealm);

    List<JsonWebTokenClaim> actorClaims = new List<JsonWebTokenClaim>();
    actorClaims.Add(new JsonWebTokenClaim(JsonWebTokenConstants.ReservedClaims.NameIdentifier, nameid));
    if (trustedForDelegation && !appOnly)
    {
        actorClaims.Add(new JsonWebTokenClaim(TrustedForImpersonationClaimType, "true"));
    }

    // Create token
    JsonWebSecurityToken actorToken = new JsonWebSecurityToken(
        issuer: issuer,
        audience: audience,
        validFrom: DateTime.UtcNow,
        validTo: DateTime.UtcNow.AddMinutes(TokenLifetimeMinutes),
        signingCredentials: SigningCredentials,
        claims: actorClaims);

    string actorTokenString = new JsonWebSecurityTokenHandler().WriteTokenAsString(actorToken);

    if (appOnly)
    {
        // App-only token is the same as actor token for delegated case
        return actorTokenString;
    }

    #endregion Actor token

    #region Outer token

    List<JsonWebTokenClaim> outerClaims = null == claims ? new List<JsonWebTokenClaim>() : new List<JsonWebTokenClaim>(claims);
    outerClaims.Add(new JsonWebTokenClaim(ActorTokenClaimType, actorTokenString));

    JsonWebSecurityToken jsonToken = new JsonWebSecurityToken(
        nameid, // outer token issuer should match actor token nameid
        audience,
        DateTime.UtcNow,
        DateTime.UtcNow.AddMinutes(10),
        outerClaims);

    string accessToken = new JsonWebSecurityTokenHandler().WriteTokenAsString(jsonToken);

    #endregion Outer token

    return accessToken;
}

 
The following MSDN article explains the parts of the access token our application builds. In essence, an “actor” token that identifies the current user is built. This actor token is made from the user claims created previously. The token is issued for the current application (the “aud” parameter) and is signed by our certificate (the SigningCredentials property). This inner token is then wrapped in an outer token that is not digitally signed.

As you can see, the mysterious “access token” is nothing more than a text string with JSON-formatted data describing an application identity and a user identity.

The last “mysterious” part is how SharePoint obtains the ClientContext object from a token. It is very simple: the access token is attached to the header of the request to the API, and when the call returns, the context is correctly initialized. We will see it with Fiddler in a moment.
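
In fact, this is roughly what TokenHelper’s GetClientContextWithAccessToken method does under the hood: it creates a regular ClientContext and hooks its ExecutingWebRequest event to attach the token as a bearer authorization header on every outgoing call. A simplified sketch:

public static ClientContext GetClientContextWithAccessToken(string targetUrl, string accessToken)
{
    ClientContext clientContext = new ClientContext(targetUrl);

    // Attach the access token to every outgoing CSOM request
    clientContext.ExecutingWebRequest += (sender, args) =>
    {
        args.WebRequestExecutor.RequestHeaders["Authorization"] = "Bearer " + accessToken;
    };

    return clientContext;
}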

Testing with Fiddler

If we open Fiddler to inspect the HTTP traffic between the application and SharePoint, we will see that the application makes a call to the CSOM API (/_vti_bin/client.svc/ProcessQuery). If we look at the request, among the headers we will see an Authorization header with the value “Bearer” followed by Base64-encoded text. This is our access token.


If we use a tool such as JWT.io to decode the token, we can see its structure.
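
If you prefer to decode it by hand, a JWT is just base64url-encoded JSON segments separated by dots. A small sketch (assuming the usual layout, with the claims in the second segment):

using System;
using System.Text;

static class JwtInspector
{
    // Decode one base64url-encoded JWT segment (segment 1 holds the claims) into JSON text
    public static string DecodeSegment(string jwt, int segment)
    {
        string part = jwt.Split('.')[segment];
        part = part.Replace('-', '+').Replace('_', '/');
        switch (part.Length % 4)
        {
            case 2: part += "=="; break;
            case 3: part += "="; break;
        }
        return Encoding.UTF8.GetString(Convert.FromBase64String(part));
    }
}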


For more information about the structure of the token, there is a magnificent post by Kirk Evans on the subject.

Conclusion

I hope this post has demystified the world of high-trust applications a little. As you can see, they let us use the app model without having to be in the cloud, which is an important step towards adapting our developments to the hybrid scenarios that look set to become much more common in the future.

Have you worked with this model? Can you share your experiences? The comments await you right here, below this post!

Welcome back / Bienvenidos de nuevo

I have been busy for the last few days migrating my blogs to a new platform, as you can see. In this post I will summarize what I have discovered in the process.

Warning for the Spanish-speaking visitors: I have migrated my Spanish-language blog SPBlogEdin.blogspot.com into my main blog, EdinKapic.com. From now on I will publish posts here in both languages, depending on the topic. To make navigation easier, all the posts in Spanish are tagged with the Español category, and the posts from the old blog will be redirected here automatically. Welcome!


My blogging infancy: Blogger

My blogging adventure began with a small personal blog in Spanish on Blogger, called “The Midnight Bard” (Bardo de medianoche), where I wrote about my everyday musings and ramblings. It seemed natural to start my own professional blog when I began working with SharePoint back in the mid-2000s.

So, a new blog was born. I struggled with a name, but I ended up calling it “Res Cogitans”, with the URL edinkapic.blogspot.com, after the famous Descartes motto “I think, therefore I am”. I started to write more or less regularly each month about the things I learned or came across in my job as a SharePoint consultant. Over time, I gained some visits and references.

In 2009 I thought of adding a local voice to my blogging. I started another blog, called SPBlogEdin, where I wrote about the Spanish SharePoint scene and about topics I found the Spanish-speaking bloggers weren’t covering. It was always meant to be a complement to the main, English-language blog, which had a more diverse and dispersed audience.

I also added my personal domain, EdinKapic.com, to the main blog to make things easier when moving to my own hosted blog in the future.

My failed attempt: Orchard

In recent years I had been thinking about unifying the two blogs into the main one. I installed Orchard CMS on one of my websites in Azure and started playing with it. It looked very promising, and it would have allowed me to actually write some ASP.NET MVC code if I had to. But the more I toyed around with it and tried to migrate my blogs from Blogger to Orchard, the more I saw that the Orchard platform fails in the same fashion as Windows Phone: a disengaged ecosystem. The CMS is very sleek, but there are just not that many plugins and themes for it. It can’t compete with the mature WordPress ecosystem.

So, a few weeks ago I finally decided on WordPress as my final platform. I provisioned a website in Azure and installed WordPress from the app gallery. It worked like a charm! The setup was effortless, and the default settings are just right for a WordPress newbie like me. I still have to brush up the design and other goodies, but it looks to me like a very solid foundation to build upon.

Migrating the posts from Blogger to WordPress without losing SEO rankings

I had hundreds of posts across the two blog accounts and I didn’t want to migrate them manually, as I had been doing with the Orchard BlogML import plugin. I also wanted to keep my SEO ranking by redirecting the posts from Blogger to WordPress without misdirection.

Luckily, I stumbled upon a few very helpful posts about migrating from Blogger to WordPress and I followed the instructions: https://rtcamp.com/blogger-to-wordpress/tutorials/permalink-seo-migration/, http://john.do/migrate-blogger-wordpress/ and http://www.mypregnancybaby.com/moving-blogger-wordpress/.

  • I added the Blogger Importer plugin to my WordPress. It authenticated with no problems to Blogger and pulled all my posts, tags and images into WordPress. I was amazed. I set the WordPress permalinks to the same schema that Blogger uses (year, month and title) and ran a quick fix (a PHP file uploaded to WordPress by FTP) that truncated the permalinks the same way Blogger does. I did this to keep my EdinKapic.com links for the main Blogger blog intact and to ease the redirection from SPBlogEdin to EdinKapic, too.
  • I edited my DNS settings for EdinKapic.com and redirected them from Blogger to my Azure instance, then disabled the custom domain for my Blogger blog.

The only problem I had was that the posts from Blogger weren’t redirected properly to the new WordPress blog. I tried a couple of tricks from those blogs, but finally I decided to try the Blogger 301 Redirect plugin, and it did the trick. I had to copy-paste the Blogger template from the plugin into both Blogger blogs’ settings, and that was it. Flawless!

As icing on the cake, I categorized the Spanish-language posts coming from SPBlogEdin into the Español category and the English-language posts into the English category, for easier navigation.

Lessons learned

First, and most painful, was to open my eyes and see that Orchard is not end-user ready yet. Technically it is, and even the interface is very nice, but the ecosystem is not.

Second, I always preach that one has to step out of one’s own comfort zone. I did it with WordPress and PHP. Not my cup of tea yet, but a nice little challenge for the coming weeks.

Where Are You Going, SharePoint?

I would like to take a short retrospective look at the rapidly changing landscape of SharePoint over the last few years, followed by a personal opinion on the future of SharePoint. It is a fairly long rant piece, so set aside at least 15 minutes to read it thoroughly.

Note: It is obvious to me, but it won’t hurt to mention, that everything here expresses my own opinion and my own opinion only.

How SharePoint has evolved from 100% on-premise to hybrid and cloud

In the early years of SharePoint development, in the early 2000s, development on the SharePoint platform was restricted to server-side code. You could only code what you could put inside the SharePoint box. Of course, the server object model let you do all sorts of things, as it is the same model used internally by SharePoint (at least a significant portion of it). But you were responsible for the performance and side effects of that code, as it ran inside SharePoint’s very own IIS process or service. As long as you did your homework, the future was bright for you.

For those less prone to scaring the IT people by stubbornly insisting on putting custom code on the SharePoint box, there was a second option: the ASMX web services. Clumsy as they were, they were the only option if you couldn’t (or wouldn’t) put your custom code on the SharePoint servers.

Over the years, that approach served us well. Best practices and guidance began to evolve, and we saw fewer and fewer SharePoint "crime scenes" (as Matthias Einig puts it).

And then Microsoft began hosting its own product, by launching SharePoint Online inside its BPOS (Business Productivity Online Suite).

Suddenly, Microsoft began to feel what it was like to actually host SharePoint. They began to experience the same nightmares IT admins all over the world had when they wanted to patch their SharePoint servers. They weren’t shielded from the customers running SharePoint anymore: they were the customer running it, and they were responsible for their tenants and their SharePoint data.

 


A SharePoint Online administrator installing cumulative updates

During the first teething years of SharePoint Online, one thing emerged painfully clear: the insight that what worked for a single datacenter doesn’t work for a cloud environment with multiple tenants. The customizations wouldn’t scale and, what’s even worse, they would break with the frequent updates to the SharePoint version on Microsoft’s servers. Even with the stop-gap measure of sandboxed code, customizations that involved code didn’t behave the way Microsoft wanted for its hosting environment.

In other words, during the golden years SharePoint allowed massive customization with server-side code. You could build web parts, web controls, timer jobs, application pages and all sorts of server-side extensions. You were allowed to shoot yourself in the foot and bring your SharePoint to a halt, because it was your own datacenter. You (or, hopefully, your governance committee) would decide what was more important. Also, you could put arbitrary code in your customizations and be happy with it. However, when you depend on your SharePoint servers running smoothly and flawlessly, that is only possible with plain-vanilla SharePoint structures and no customization. As careful as you were, there was always a penalty for your custom code. As for the security of that code, things such as CAS policies were a step in the right direction, but many customers just turned their code trust level in web.config to Full and lived with it.

How scalability and performance have taken precedence over customization

But not Microsoft. Microsoft could not allow SharePoint Online to fail. Also, the prospect of selling monthly licenses instead of a one-time license was meant to inject significant cash flow into the Microsoft Office Division balance sheets. Between the needs of the "less desirable" one-time customers (who wanted custom code) and the needs of the Microsoft SharePoint Online cash cow (which wanted scalability and performance), the scales tipped to the side of scalability (pun not intended).

So, things such as the client object model and, finally, the "cloud application model" came to be. Your code was allowed, as long as it ran outside SharePoint. The infrastructure was put into place to allow your code to call into SharePoint without the need for explicit credentials. Your calls to SharePoint were carefully secured, throttled and channeled to allow the non-customized SharePoint parts to run without being blocked or delayed.

SharePoint erred on the side of performance, banishing server code in cloud deployments and slowly banishing (or deemphasizing) even declarative customization. As long as you are fine with the evolving standard SharePoint Online features, you are good to go. Want more? Plug in an app, even if the client object model doesn’t allow for all the richness of functionality that the server object model did. Your application runs on your servers, consuming your resources and leaving their SharePoint instances out of your bad influence.

What’s the future for on-premises SharePoint customers?

But what Microsoft doesn’t seem to get (or seems to ignore) is the reality of corporate customers. SharePoint was never software for small companies. Yes, you could run SharePoint Foundation (or Windows SharePoint Services before that) for free, but the need for the corporate-scale collaboration and knowledge management that SharePoint ushers in is something that goes with medium and big companies, as it breaks department silos and culture barriers. Also, the hefty price tag of the SharePoint license didn’t help.

Now, SharePoint Online as part of the Office 365 package is affordable. Even smaller companies can take advantage of it. But are they doing it? In my experience, the vast majority of Office 365 customers in small companies are more interested in Exchange Online, because running a mail infrastructure is something every company does, small or big, while running SharePoint is not everyone’s cup of tea. Whenever I hear the Microsoft marketing machine trumpeting about "millions of Office 365 users", I can’t help asking myself how many of them are only using Exchange Online out of the whole Office 365 package. (Answer: I don’t know, but I imagine a vast majority.)

So, here we have Microsoft cranking out new features to suit "imaginary" customers who want wizards, flashy UI and shiny settings, while the real SharePoint customers with deep pockets are caught between a rock (declining investment in new on-premises features) and a hard place (the fact that customization options for Office 365 deployments are limited at best). They need the freedom to customize their SharePoint to suit their needs, and right now the capabilities of the app model and the Office 365 app model are just not enough. Not to mention the need for UI customization, which is the first thing corporate customers want. They now have to jump through hoops just to deploy their master pages, CSS and JS files and keep them from breaking with each SharePoint Online update. Not good.

As a side note, the need of on-premises clients for top-notch social without hassle was the driver that made Beezy, the company I work for, possible. If those capabilities were available in SharePoint Server, we would have been out of business even before starting.

That has been my experience with the SharePoint customers who are more or less "invited" to become Office 365 customers. Only a few of them venture into the hybrid world of connecting their on-premises farm with Office 365. In the future this number will increase, but will it ever be so overwhelming as to dispense with on-premises SharePoint? My opinion is that it won’t. We won’t see the end of on-prem SharePoint in the foreseeable future.


What lies ahead?

Why do I so firmly believe this? First, the cloud is not yet on par with the on-prem world. I know that SharePoint is just the "expression" of the solution to a business need, but right now going to the cloud makes you lose a certain flexibility (in the depth of your customization) and gain another (in deployment and operation costs). So there is still a gap between what the cloud enables you to do and what SharePoint Server enables you to do. I believe that gap will keep closing, but not disappearing. Yet the cloud will allow us to plug and play different services and technologies to build our solutions. It will widen our choice of options, not narrow it down.

I think we will see more and more hybrid deployments, but there will always be on-premises deployments with workloads that are only suitable for that scenario. Conversely, there will be (and there are) cloud-only scenarios, such as the machine-learning-powered Delve and Outlook Clutter, that can’t be deployed on-premises because they require some really BIG data (pun intended) to work properly. But for a big company with its own processes and dynamics, I just can’t see a cloud-only environment, no matter how hard I try to imagine one.

However, the love SharePoint Server got from Microsoft during the last decade has gone to the new, cloudy sweetheart: SharePoint Online. It is a fact.

I would like to imagine the future SharePoint Server (call it vNext or 2015 or whatever) being more like Windows 8 than Windows XP. Windows XP had to be replaced with Windows Vista, and it was a fairly traumatic deployment. But Windows 8 was "updated" to Windows 8.1 with little or no hassle, and you ended up with a very different Windows. In other times, it might have been called Windows 9 and would have had to be deployed in a "traumatic" fashion. In the same light, I can imagine the future SharePoint getting periodic updates (as Windows or Visual Studio do), with new and enhanced features, not only bug fixes. I can imagine that the main development branch is the cloud one, and that not all features can translate to the on-premises version, but I can’t imagine a reason for not having that kind of quick, iterative development for the cloud that "trickles down" to SharePoint Server periodically. Gone are the days of a 3-year cycle for SharePoint Server; the cycle is now merely a couple of weeks long. That is perfect for the cloud, but I think (and hope) it will also bring enhancements to on-premises customers. Let’s face it, SharePoint is a fairly mature product with a broad set of features. What corporate customers need is not the latest eye-candy, "sexy" features, but fixes for the long-standing pains of basic, staple SharePoint features. I’m not saying there is no value in the new features; there certainly is. However, there is no excuse for half-baked functionality any more in SharePoint, online or not.

What’s the future for SharePoint specialists?

In the end, where does this leave us, the SharePoint developers, architects, consultants, power users, customizers and so on?

It is normal to feel a certain amount of dizziness and a feeling of being let down. Microsoft’s messaging to customers is chronically ambiguous, although the transparency shown by some of the teams at Microsoft has been exemplary in the last years. They are improving. But still, giving mixed signals or flatly ignoring the rightful questions of corporate users makes them question whether Microsoft’s commitment can be taken seriously. (Yes, I mean Silverlight and sandboxed solutions and auto-hosted apps and so on.)

Well, let’s face it: the world is changing. It never stopped changing. We can’t cling to the comfort zone anymore, if we ever could. I think the pace of technological change has accelerated, and with it comes a natural reluctance to change (I explain some of it in my Pluralsight course about human behavior). We have to embrace that change.

What does that mean? It doesn’t mean drinking the Microsoft Kool-Aid and blindly believing their marketing messages. They have every right to send whatever signals they want, but we SharePoint specialists are paid to assess and advise our clients about their technology, not to repeat it like trained parrots. In many cases the message is perfectly valid, but in many cases it is not, and we should know better. As I said before, SharePoint (and Office 365) were, are and will be tools to build solutions for business needs, not the solutions themselves. Sometimes we lose track of this simple fact.

It means that we should learn, practice and keep raising the bar. We are on the front line of technology and we should be prepared. Luckily, it has never been easier to learn new things and find guidance (the OfficeDev PnP team is just one example of that). Dedicate some time to play with the cloud, to experiment with apps, JavaScript and hybrid environments, to fiddle with Delve and the Office Graph, and to keep pushing the envelope. You will be acquiring new tools to build your technological solutions to business problems. That’s your job as a technology specialist.

What are your opinions? I’m eager to know them.

All images provided by Freeimages.com.

SharePoint Apps vs Office 365 Apps

These last few days there has been a heated debate about the future of the SharePoint app model.

On December 22nd, the great SharePoint guru Sahil Malik published a post (SharePoint App Model: Rest in Peace) that opened something of a Pandora’s box about the app model. His argument is that the app model introduced with SharePoint 2013 is being sidelined in favour of the new Office 365 app model.

Let’s look at this debate in a bit more detail. First I will introduce the boxers, and then we will see how the match plays out.


The contenders in the ring

The SharePoint 2013 app model

We have had this model for a few years now, and it is still used very little for "serious" projects. In essence, SharePoint acts as a data provider, and a remote (provider-hosted) application or JavaScript consumes that data through the REST interface (or its JSOM/CSOM abstraction). For authentication between the remote application and SharePoint, OAuth tokens are used, with the possible intermediation of Azure ACS.

Apps for SharePoint hosting options

The Office 365 app model

This new model, introduced in 2014, is an abstraction on top of the services of the Office 365 platform. The services are exposed through a REST interface and are designed to provide high-level operations such as "get contacts" or "read documents". Applications can be built with any technology, and to authenticate an application against Office 365 there is Azure Active Directory (AAD), which stores the users’ credentials and gives us back their tokens to use against the Office 365 API.

Development stack for creating solutions that use Office 365 APIs. Select your developer environment and language. Then use Azure single sign-on authentication to connect to the Office 365 APIs.
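
To give a flavour of the model, here is a minimal sketch of calling the Office 365 API with an AAD-issued token (the token acquisition is omitted; in a real app it would come from an AAD library such as ADAL, and the contacts endpoint shown is just one example):

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class Office365ApiSketch
{
    // accessToken is assumed to have been obtained from Azure AD beforehand
    static async Task<string> GetContactsAsync(string accessToken)
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            // High-level, business-like operation: "get my contacts"
            var response = await client.GetAsync("https://outlook.office365.com/api/v1.0/me/contacts");
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }
}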

How do they differ?

In many things, but in my opinion the basic differences are these:

| SharePoint apps | Office 365 apps |
| --- | --- |
| Low-level operations | High-level operations |
| Use Azure ACS | Use Azure AD |
| Can only access SharePoint | Can access Office 365 services, Azure Active Directory and SharePoint |
| Allow the high-trust model for on-premises installations | Can only be used with a cloud or hybrid installation |

If you want a very well illustrated comparison with screenshots, this post by Chris O’Brien is the best resource.

So, what should I do?

Given this situation Microsoft has put us in, one may wonder what to expect in the future. Will the app model disappear in the next version of SharePoint, as already happened with sandboxed solutions? If I am going to build an app for SharePoint, should I use the SharePoint model or the Office 365 one?

I believe that the SharePoint 2013 app model will not disappear; it will just be one more option to consider. We now have the good old full-trust code (FTC) model, supported 100% on-premises; the app model, supported in the cloud and on-premises; and in the new version of SharePoint we will have the option of using the Office 365 app model if we have a cloud or hybrid deployment. What we are not going to see is "one model to rule them all". As the range of SharePoint hosting options has widened, Microsoft has widened the range of custom code solutions to better fit each of them.

The Office 365 app model is cleaner than the SharePoint app model, which is understandable given its nature. It is an abstract model of services over the whole Office 365 platform, and it doesn’t have to worry about technical details such as the insufferable AppRegNew.aspx pages, ClientIds and assorted nonsense. If we are building an enterprise app that combines managing documents, lists, contacts, mail and so on, the Office 365 model is more direct and easier.

The SharePoint app model I see as focused on solutions that only involve SharePoint, or on cases where we are not in the Office 365 world, not even in hybrid mode. Likewise, if we know what we are doing and want to keep all the power of SharePoint for ourselves in our own datacenter, the server-side code model is not going to disappear.

And you, what do you think?

High-Trust SharePoint Apps, Token Lifetime and MemoryCache

In recent months I have been busy working on a project that includes a high-trust, on-premises SharePoint 2013 app accessed by many people at the same time. Each user is issued an access token that authenticates the user and points to his or her SharePoint site.

The problem that began surfacing is that, by default, high-trust access tokens have a lifetime of only 10 minutes. So, as we cached the token in memory, after 10 minutes SharePoint would start returning 401 Unauthorized errors because the token had expired.

Extending the lifetime of the access token

The solution to the problem involved increasing the lifetime of the access token to 60 minutes. This is a fairly simple change in the TokenHelper.cs file.

Find the IssueToken private method in TokenHelper.cs and, in the "Outer token" region, change the line that creates the JWT token to this:

JsonWebSecurityToken jsonToken = new JsonWebSecurityToken(
    nameid, // outer token issuer should match actor token nameid
    audience,
    DateTime.UtcNow,
    DateTime.UtcNow.AddMinutes(60),
    outerClaims);

MemoryCache

In addition, we added a MemoryCache single-instance in-memory application cache (available in .NET since version 4.0). The access tokens are added to the cache with an absolute lifetime of 60 minutes, so they expire at the same time as their lifetime in SharePoint. Once evicted from the memory cache, the access tokens are recreated with an additional 60 minutes of lifetime and stored in the cache again.

CacheItem item = new CacheItem(key, value);
CacheItemPolicy policy = new CacheItemPolicy();
policy.AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(60);
policy.RemovedCallback = new CacheEntryRemovedCallback((args) =>
{
    if (args.CacheItem.Value is IDisposable)
    {
        ((IDisposable)args.CacheItem.Value).Dispose();
    }
});
_cache.Set(item, policy);

Bonus: Refactored for reusability and testing

To make our code simpler and more understandable, we added the caching capability as a provider, exposed through the ICachingProvider interface, which defines a single operation for getting an object from the cache by a specific key. We wanted the code to be reusable not only for tokens but in all suitable situations.

public interface ICachingProvider
{
    T GetFromCache<T>(string key, Func<T> cacheMissCallback);
}

GetFromCache is a generic method that lets the caller get a typed object from the cache by providing a string key and a type. Moreover, the operation requires a fallback method: a generic Func<T> that returns an object of type T. The cache implementation invokes this delegate (a lambda expression, in most cases) if the specified key is not found. By invoking the delegate, we get the object from its source (as if there were no cache) and store it in the cache under the given key.
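
A hypothetical call site for the token scenario could then look like this (CreateAccessToken is an illustrative method, not part of the provider):

// _cachingProvider is an ICachingProvider instance resolved elsewhere (e.g. via an IoC container)
string accessToken = _cachingProvider.GetFromCache(
    "token:" + userName,                   // cache key, one entry per user
    () => CreateAccessToken(userName));    // invoked only on a cache miss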

The full CachingProvider code is shown below. There are two auxiliary methods to add and retrieve items from the cache. To avoid concurrency exceptions, access to the MemoryCache instance is protected by a lock object called padLock.

public class CachingProvider : ICachingProvider
{
    protected MemoryCache _cache = MemoryCache.Default;
    protected static readonly object padLock = new object();

    public CachingProvider()
    {
    }

    private void AddItem(string key, object value)
    {
        lock (padLock)
        {
            CacheItem item = new CacheItem(key, value);
            CacheItemPolicy policy = new CacheItemPolicy();
            policy.AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(60);
            policy.RemovedCallback = new CacheEntryRemovedCallback((args) =>
            {
                if (args.CacheItem.Value is IDisposable)
                {
                    ((IDisposable)args.CacheItem.Value).Dispose();
                }
            });
            _cache.Set(item, policy);
        }
    }

    private object GetItem(string key)
    {
        lock (padLock)
        {
            var result = _cache[key];
            return result;
        }
    }

    public T GetFromCache<T>(string key, Func<T> cacheMissCallback)
    {
        var objectFromCache = GetItem(key);
        T objectToReturn = default(T);
        if (objectFromCache == null)
        {
            objectToReturn = cacheMissCallback();
            if (objectToReturn != null)
            {
                AddItem(key, objectToReturn);
            }
        }
        else
        {
            if (objectFromCache is T)
            {
                objectToReturn = (T)objectFromCache;
            }
        }
        return objectToReturn;
    }
}

One added bonus of having an interface is that we had two implementations: CachingProvider (the normal caching service) and DummyCachingProvider (which simply bypasses the cache and returns the result of the Func delegate invocation). This way we could disable caching by injecting the appropriate caching provider instance, and it also benefitted unit testing, as we could exercise both the cached and non-cached code paths.

public class DummyCachingProvider : ICachingProvider
{
    // No caching at all: always invoke the callback directly
    public T GetFromCache<T>(string key, Func<T> cacheMissCallback)
    {
        return cacheMissCallback();
    }
}

Business Value of Social Computing (III)

Welcome to the third installment of the post series on the business value of social computing. You can review the first and the second posts of the series if you haven’t already done so. In this post I will highlight two main problems with social computing today.

We have seen what social means and what parts make up the mosaic of social computing. We have also seen why businesses use social and what benefits it can bring. However, businesses still cope with a lack of adoption. Why is that? From our experience deploying the Beezy social network for SharePoint to our many customers, we have learned some key insights into the social success patterns of companies that have succeeded with social. We have also learned what doesn’t work, so as to avoid it in future deployments.

Here are the two pain points with social computing today:

Social Is Slow to Grow

Enterprise social networks (ESN) are still a young technology. Even though they have been available for years, they didn’t become mainstream until a few years ago. Compared to a well-established technology such as OLAP data warehouses or business process management (BPM), ESN has yet to crystallize into best-practice patterns and actionable guidance.

There are several reasons for this slowness in the adoption of social. It is a very disruptive technology, because it changes the way people work and share information. A change of this magnitude needs a lot of time to "stick" (some studies say that it can take 18 months for a change to become durable). In today’s fast-paced world, the instantaneous results we expect from deploying any new technology won’t be there with social. Embracing social in the company is a marathon, not a sprint. It has to be nurtured, encouraged and guided.

Also, the lack of clear guidance and the variety of use cases for social in different sectors and companies don’t help much to speed up adoption. There are too many contradictory messages and guidelines out there. A company that wants to adopt social computing can’t just expect to plug it in and reap the benefits. It’s slow-paced progress, but it’s also a steady journey. Have patience!

There Are Common Pitfalls in the Way

The second frequent pain point with ESN has to do with the common pitfalls and barriers to successful adoption of social at work. Over and over again, companies have faced a lack of social adoption, and the root causes are very common to all of them.

The first cause of social failure is the lack of an overall strategy for social. As I said before, social computing isn’t a database engine that is simply plugged in. It needs to be deployed in the context of a greater business strategy, in order to achieve business goals. If you deploy first and ask about the business need later, you are putting the cart before the horse. Keep in mind that social is a tool, not an end in itself. (You can check part 2 of the series for a refresher on the common business cases for social.)

The second common cause of lack of adoption is having too many competing priorities. It means that the company picks many desirable technology projects but doesn’t really commit hard to any of them. While not putting all your eggs in the same basket is common sense, not committing makes you a paper tiger: strong-looking from a distance but shaky when confronted. You will need "teeth" (business support and gravitas) to overcome resistance to change, and starting without that support is a recipe for disaster. If you don’t have executive sponsors for your social endeavour, don’t even start. I have warned you!

The third cause is not having a clear business case. As I said a few paragraphs before, every business has a unique case for social, and you will have to find yours. If it’s lackluster or too broad, your users won’t find any value in the extra hassle needed to learn the new technology. It won’t work for them. Make sure to find your value proposition for social in order to start on the right track.

Summary

We have seen how social is by its nature slow to grow and how easily it is derailed by common pitfalls. If you plan your social computing deployment taking these facts into account, your chances of success will be higher.

 

Do you have something to add to this conversation? Make sure to leave a comment below.