Welcome back / Bienvenidos de nuevo

I have been busy for the last few days migrating my blogs to a new platform, as you can see. In this post I will summarize what I have discovered in the process.

A warning for Spanish-speaking visitors: I have migrated my Spanish blog, SPBlogEdin.blogspot.com, into my main blog, EdinKapic.com. Here I will publish posts in both languages, depending on the topic at hand. To make navigation easier, all posts in Spanish have the Español category, and the posts from the old blog will be redirected here automatically. Welcome!


My blogging infancy: Blogger

My blogging adventure began with a small personal blog in Spanish on Blogger, called "The Midnight Bard" (Bardo de medianoche), where I wrote about my everyday musings and ramblings. It seemed natural to start my own professional blog when I began working with SharePoint back in the mid-2000s.

So, a new blog was born. I struggled with a name, but I settled on "Res Cogitans", with the URL edinkapic.blogspot.com, after the famous Descartes motto "I think, therefore I am". I started to write more or less regularly each month about the things I learned or came across in my job as a SharePoint consultant. Over time, I gained some visits and references.

In 2009 I thought of adding a local voice to my blogging. I started another blog, called SPBlogEdin, where I wrote about the Spanish SharePoint scene and about topics I found the Spanish-speaking bloggers weren't covering. It was always meant to be a complement to the main, English-language blog, which had a more diverse and dispersed audience.

I also added my personal domain, EdinKapic.com, to the main blog to make things easier when moving to my own hosted blog in the future.

My failed attempt: Orchard

In recent years I had been thinking about unifying the two blogs into the main one. I installed Orchard CMS on one of my websites in Azure and started playing with it. It looked very promising, and it allowed me to actually write some code in ASP.NET MVC if I had to. But the more I toyed with it and tried to migrate my blogs from Blogger to Orchard, the more I saw that the Orchard platform was failing in the same fashion as Windows Phone: a disengaged ecosystem. The CMS is very sleek, but there are just not that many plugins and themes for it. It can't compete with the mature WordPress ecosystem.

So, a few weeks ago I finally decided on WordPress as my final platform. I provisioned a website in Azure and installed WordPress from the app gallery. It worked like a charm! The setup was effortless, and the default settings are just right for a WordPress newbie like me. I still have to brush up the design and other goodies, but it looks like a very solid foundation to build upon.

Migrating the posts from Blogger to WordPress without losing SEO rankings

I had hundreds of posts in the two blog accounts and I didn't want to migrate them manually, as I had been doing with the Orchard BlogML import plugin. I also wanted to keep my SEO rankings by properly redirecting the posts from Blogger to WordPress.

Luckily, I stumbled upon a few very helpful posts about migrating from Blogger to WordPress and followed their instructions: https://rtcamp.com/blogger-to-wordpress/tutorials/permalink-seo-migration/, http://john.do/migrate-blogger-wordpress/ and http://www.mypregnancybaby.com/moving-blogger-wordpress/.

  • I added the Blogger Importer plugin to my WordPress. It authenticated with Blogger with no problems and pulled all my posts, tags and images into WordPress. I was amazed. I set the WordPress permalinks to the same schema Blogger uses (year, month and title) and ran a quick fix (a PHP file uploaded to WordPress by FTP) that truncated the permalinks in the same fashion Blogger does. I did this to keep my EdinKapic.com links for the main Blogger blog intact and to ease the redirection from SPBlogEdin to EdinKapic, too.
  • I edited my DNS settings for EdinKapic.com and I redirected them from Blogger to my Azure instance, then I disabled custom domain for my Blogger blog.

The only problem I had was that the posts from Blogger weren't redirected properly to the new WordPress blog. I tried a couple of tricks from those blogs, but finally I decided to try the Blogger 301 Redirect plugin, and it did the trick. I had to copy-paste the Blogger template from the plugin into the settings of both Blogger blogs, and that was it. Flawless!

As icing on the cake, I put the Spanish-language posts coming from SPBlogEdin into the Español category and the English-language posts into the English category for easier navigation.

Lessons learned

The first, and most painful, lesson was to open my eyes and see that Orchard is not end-user ready yet. Technically it is, and the interface is very nice, but the ecosystem is not.

Second, I always preach that one has to step out of one's comfort zone. I did it with WordPress and PHP. Not my cup of tea yet, but a nice little challenge for the coming weeks.

Where Are You Going, SharePoint?

I would like to take a short retrospective look at the rapidly changing landscape of SharePoint over the last few years, followed by a personal opinion on the future of SharePoint. It is a fairly long rant piece, so you are advised to have at least 15 minutes at hand to read it thoroughly.

Note: it is obvious to me, but it won't hurt to mention, that everything written here expresses my own opinion and my own opinion only.

How SharePoint has evolved from 100% on-premise to hybrid and cloud

In the early years of SharePoint development, in the early 2000s, development on the SharePoint platform was restricted to server-side code. You could only run code that you could put inside the SharePoint box. Of course, the server object model let you do all sorts of things, as it is the same model used internally by SharePoint (at least a significant portion of it). But you were responsible for the performance and side effects of that code, as it ran inside SharePoint's own IIS process or service. As long as you did your homework, the future was bright for you.

For those less prone to scaring the IT people by stubbornly insisting on putting custom code on the SharePoint box, there was a second option: the ASMX web services. Clumsy as they were, they were the only option if you couldn't (or wouldn't) put your custom code on the SharePoint servers.

Over the years, that approach served well. Best practices and guidance began to evolve, and we saw fewer and fewer SharePoint "crime scenes" (as Matthias Einig puts it).

And then Microsoft began hosting its own product, by launching SharePoint Online inside its BPOS (Business Productivity Online Suite).

Suddenly, Microsoft began to feel what it was like to actually host SharePoint. They began to experience the same nightmares IT admins all over the world had when they wanted to patch their SharePoint servers. They weren't shielded from the customers running SharePoint any more: they were the customer running it, and they were responsible for their tenants and their SharePoint data.



SharePoint Online administrator installing cumulative updates

During the first teething years of SharePoint Online, one thing emerged painfully clear: what worked for a single datacenter doesn't work for a multi-tenant cloud environment. The customizations wouldn't scale and, what's even worse, they would break with the frequent updates to the SharePoint version on Microsoft's servers. Even with the stop-gap measure of sandboxed code, customizations that involved code didn't behave the way Microsoft wanted for its hosting environment.

In other words, during the golden years SharePoint allowed massive customization with server-side code. You could build web parts, web controls, timer jobs, application pages and all sorts of server-side extensions. You were allowed to shoot yourself in the foot and bring your SharePoint to a halt, because it was your own datacenter. You (or, hopefully, your governance committee) would decide what was more important. Also, you could put arbitrary code in your customizations and be happy with it. However, when you depend on your SharePoint servers running smoothly and flawlessly, that is only possible with plain-vanilla SharePoint structures and no customization. As careful as you were, there was always a penalty for your custom code. As for the security of that code, things such as CAS policies were a step in the right direction, but many customers just turned their code trust level in web.config to Full and lived with it.
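That last tweak was a single line in web.config (shown here for illustration; setting the level to Full effectively disables the CAS policy restrictions):

```xml
<system.web>
  <!-- Grants all custom code full trust, bypassing CAS policy checks -->
  <trust level="Full" originUrl="" />
</system.web>
```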

How scalability and performance have taken precedence over customization

But not Microsoft. Microsoft could not allow SharePoint Online to fail. Also, the prospect of selling monthly licenses instead of one-time licenses was bound to inject significant cash flow into the Microsoft Office Division balance sheets. Between the needs of the "less desirable" one-time customers (who wanted custom code) and the needs of the Microsoft SharePoint Online cash cow (which wanted scalability and performance), the scales tipped to the side of scalability (pun not intended).

So, things such as the client object model and, finally, the "cloud application model" came to be. Your code was allowed, as long as it ran outside SharePoint. The infrastructure was put into place to allow your code to call into SharePoint without the need for explicit credentials. Your calls to SharePoint were carefully secured, throttled and channeled so that the non-customized SharePoint parts could run without being blocked or delayed.

SharePoint erred on the side of performance, banishing server code from cloud deployments and slowly banishing (or de-emphasizing) even declarative customization. As long as you are fine with the evolving standard SharePoint Online features, you are good to go. Want more? Plug in an app, even if the client object model doesn't allow for all the richness of functionality that the server object model did. Your application would run on your servers, consuming your resources and leaving Microsoft's SharePoint instances out of your bad influence.

What’s the future for on-premise SharePoint customers?

But what Microsoft doesn't seem to get (or seems to ignore) is the reality of corporate customers. SharePoint was never software for small companies. Yes, you could run SharePoint Foundation (or Windows SharePoint Services before that) for free, but the corporate-scale collaboration and knowledge management that SharePoint ushers in is something for medium and big companies, as it breaks department silos and culture barriers. Also, the hefty price tag of the SharePoint license didn't help.

Now, SharePoint Online as part of the Office 365 package is affordable. Even smaller companies can take advantage of it. But are they doing it? In my experience, the vast majority of Office 365 customers in small companies are more interested in Exchange Online, because running a mail infrastructure is something every company does, small and big, while running SharePoint is not everyone's cup of tea. Whenever I hear the Microsoft marketing machine trumpeting about "millions of Office 365 users", I can't help asking myself how many of them are using only Exchange Online out of the whole Office 365 package. (Answer: I don't know, but I imagine a vast majority.)

So here we have Microsoft cranking out new features to suit its "imaginary" customers who want wizards, flashy UI and shiny settings, while the real SharePoint customers with deep pockets are caught between a rock (declining investment in new on-premises features) and a hard place (the fact that Office 365 customization options are limited at best). They need the freedom to customize their SharePoint to suit their needs, and right now the capabilities of the app model and the Office 365 app model are just not enough. Not to mention UI customization, which is the first thing corporate customers want. They now have to jump through hoops just to deploy their master pages, CSS and JS files and keep them from breaking with each SharePoint Online update. Not good.

As a side note, the need of on-premises clients for top-notch social without hassle was the driver that made Beezy, the company I work for, possible. If those capabilities were available in SharePoint Server, we would have been out of business before even starting.

That has been my experience with the SharePoint customers who are more or less "invited" to become Office 365 customers. Only a few of them venture into the hybrid world of connecting their on-premises farm with Office 365. In the future this number will increase, but will it ever be so overwhelming as to dispense with on-premises SharePoint altogether? My opinion is that it won't. We won't see the end of on-prem SharePoint in the foreseeable future.


What lies ahead?

Why do I believe this so firmly? First, the cloud has not reached full parity with the on-prem world. I know that SharePoint is just the "expression" of the solution for a business need, but right now going to the cloud makes you lose one kind of flexibility (the depth of your customization) and gain another (deployment and operation costs). So there is still a gap between what the cloud enables you to do and what SharePoint Server enables you to do. I believe the gap will keep closing, but not disappearing. Still, the cloud will allow us to plug and play different services and technologies to build our solutions. It will widen our range of options, not narrow it down.

I think we will see more and more hybrid deployments, but there will always be on-premises deployments with workloads that are only suitable for that scenario. Conversely, there will be (and there are) cloud-only scenarios such as the machine-learning-powered Delve and Outlook Clutter that can't be deployed on-premises, as they require some really BIG data (pun intended) to work properly. But for a big company with its own processes and dynamics, I just can't see a cloud-only environment, no matter how hard I try to imagine one.

However, the love SharePoint Server got from Microsoft during the last decade has gone to the new cloud sweetheart: SharePoint Online. It is a fact.

I would like to imagine the future SharePoint Server (call it vNext or 2015 or whatever) being more like Windows 8 than Windows XP. Windows XP had to be replaced with Windows Vista, and it was a fairly traumatic deployment. But Windows 8 was "updated" to Windows 8.1 with little or no hassle, and you ended up with a very different Windows. In other times it would maybe have been called Windows 9 and would have had to be deployed in a "traumatic" fashion.

In the same light, I can imagine a future SharePoint getting periodic updates (as Windows or Visual Studio do), with new and enhanced features, not only bug fixes. I can imagine that the main development branch is the cloud one, and that not all features can translate to the on-premises version, but I can't imagine a reason for not having that kind of quick, iterative development for the cloud that "trickles down" to SharePoint Server periodically. Gone are the days of the 3-year cycle for SharePoint Server; the cycle is now merely a couple of weeks long. That is perfect for the cloud, but I think (and hope) it will also bring enhancements to on-premises customers.

Let's face it: SharePoint is a fairly mature product with a broad set of features. What corporate customers need is not the latest eye-candy, "sexy" features but fixes for the long-standing pains of basic, staple SharePoint features. I'm not saying there is no value in the new features; there certainly is. However, there is no excuse for half-baked functionality any more in SharePoint, online or not.

What’s the future for SharePoint specialists?

In the end, where does this leave us, the SharePoint developers, architects, consultants, power users, customizers and so on?

It is normal to feel a certain amount of dizziness and a feeling of being let down. Microsoft's messaging to customers is chronically ambiguous, although the transparency shown by some of the teams at Microsoft has been exemplary in the last few years. They are improving. But still, giving mixed signals or flatly ignoring the rightful questions of corporate users makes them question whether Microsoft's commitment can be taken seriously. (Yes, I mean Silverlight and sandboxed solutions and autohosted apps and so on.)

Well, let's face it: the world is changing. It never stopped changing. We can't cling to the comfort zone any more, if we ever could. I think the pace of technological change has accelerated, and with it comes a natural reluctance to change (I explain some of this in my Pluralsight course about human behavior). We have to embrace that change.

What does that mean? It doesn't mean drinking the Microsoft Kool-Aid and blindly believing their marketing messages. They have every right to send whatever signals they want, but we SharePoint specialists are paid to assess and advise our clients about their technology, not to repeat it like trained parrots. In many cases the message is perfectly valid, but in many cases it is not, and we should know better. As I said before, SharePoint (and Office 365) were, are and will be tools to build solutions for business needs, not the solutions themselves. Sometimes we lose track of this simple fact.

It means that we should learn, practice and keep raising the bar. We are on the front line of technology and we should be prepared. Luckily, it has never been easier to learn new things and find guidance (the OfficeDev PnP team is just one example of that). Dedicate some time to play with the cloud, to experiment with apps, JavaScript and hybrid environments, to fiddle with Delve and the Office Graph, and to keep pushing the envelope. You will be acquiring new tools to build technological solutions to business problems. That's your job as a technology specialist.

What are your opinions? I’m eager to know them.

All images provided by Freeimages.com.

SharePoint Apps vs Office 365 Apps

In the last few days there has been a heated debate about the future of the SharePoint app model.

On December 22nd, the great SharePoint guru Sahil Malik published the following post (SharePoint App Model: Rest in Peace), which opened a bit of a Pandora's box about the app model. His argument is that the app model introduced with SharePoint 2013 is being sidelined in favor of the new Office 365 app model.

Let's look at this debate in a bit of detail. First I will introduce the boxers, and then we will see how the match turns out.


The contenders in the ring

The SharePoint 2013 app model

We have had this model for a few years now, and it is still used rather little for "serious" projects. In essence, SharePoint acts as a data provider, and a remote (provider-hosted) application or JavaScript consumes that data through the REST interface (or its JSOM/CSOM abstraction). For authentication between the remote application and SharePoint, OAuth tokens are used, possibly with Azure ACS as an intermediary.

Apps for SharePoint hosting options

The Office 365 app model

This new model, introduced in 2014, is an abstraction on top of the Office 365 platform services. The services are exposed through a REST interface and are designed to provide high-level operations such as "get contacts" or "read documents". Applications can be built in any technology, and to authenticate an application with Office 365 we have Azure Active Directory (AAD), which stores the users' credentials and returns their tokens to be used against the Office 365 API.

Development stack for creating solutions that use Office 365 APIs. Select your developer environment and language. Then use Azure single sign-on authentication to connect to the Office 365 APIs.

How do they differ?

In many things, but in my opinion the basic differences are:

  • Level of operations: SharePoint apps expose low-level operations; Office 365 apps expose high-level operations.
  • Authentication service: SharePoint apps use Azure ACS; Office 365 apps use Azure AD.
  • Reach: SharePoint apps can only access SharePoint; Office 365 apps can access the Office 365 services, Azure Active Directory and SharePoint.
  • Deployment: SharePoint apps allow the high-trust model for on-premises installations; Office 365 apps can only be used with cloud or hybrid installations.
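To make the low-level vs high-level contrast concrete, here is a sketch of the two call styles (the site URL and list name are made up for illustration; both calls assume an access token was already obtained, from ACS and AAD respectively):

```csharp
using System.Net.Http;
using System.Net.Http.Headers;

// Sketch only: both models end up as authenticated REST calls.
var client = new HttpClient();
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", accessToken);

// SharePoint app model: low-level, resource-oriented call against a single site.
string items = await client.GetStringAsync(
    "https://contoso.sharepoint.com/sites/dev/_api/web/lists/getbytitle('Invoices')/items");

// Office 365 API model: high-level, user-centric operation.
string contacts = await client.GetStringAsync(
    "https://outlook.office365.com/api/v1.0/me/contacts");
```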

If you want to see a very well illustrated comparison with screenshots, this post by Chris O'Brien is the best resource.

So, what should I do?

Given this situation Microsoft has led us into, one may wonder what to expect in the future. Will the app model disappear in the next version of SharePoint, as already happened with sandboxed solutions? If I am going to build an app for SharePoint, should I use the SharePoint model or the Office 365 one?

I believe the SharePoint 2013 app model will not disappear; rather, it will be one more option to consider. We now have the good old full-trust code (FTC) model, supported 100% on-premises; the app model, supported in the cloud and on-premises; and in the new version of SharePoint we will have the option of using the Office 365 app model if we have a cloud or hybrid deployment. What we are not going to see is "one model to rule them all". By widening the range of SharePoint hosting options, Microsoft has widened the range of custom-code solutions to better fit each of them.

The Office 365 app model is cleaner than the SharePoint app model, which is understandable given its nature. It is an abstract service model over the whole Office 365 platform, and it doesn't have to worry about technical details such as the insufferable AppRegNew.aspx pages, ClientIds and assorted nonsense. If we are building an enterprise app that combines managing documents, lists, contacts, mail and so on, the Office 365 model is more direct and easier.

I see the SharePoint app model as better suited to solutions that only involve SharePoint, or to cases where we are not in the Office 365 world at all, not even in hybrid mode. Likewise, if we know what we are doing and we want all the power of SharePoint for ourselves in our own datacenter, the server-side code model is not going away.

And you, what do you think?

High-Trust SharePoint Apps, Token Lifetime and MemoryCache

In the last few months I have been busy working on a project that includes a high-trust on-premises SharePoint 2013 app accessed by many people at the same time. Each user is issued an access token that authenticates the user and points to his or her SharePoint site.

The problem that began surfacing is that, by default, high-trust access tokens have a lifetime of only 10 minutes. Since we cached the token in memory, after 10 minutes SharePoint would start returning 401 Unauthorized errors because the token had expired.

Extending the lifetime of the access token

The solution involved increasing the lifetime of the access token to 60 minutes. This is a fairly simple change in the TokenHelper.cs file.

Find the IssueToken private method in TokenHelper.cs and, in the "Outer Token" region, change the expiration in the line that creates the JWT token (the fourth constructor argument; the stock template uses DateTime.UtcNow.AddMinutes(10)):

JsonWebSecurityToken jsonToken = new JsonWebSecurityToken(
    nameid, // outer token issuer should match actor token nameid
    audience,
    DateTime.UtcNow,
    DateTime.UtcNow.AddMinutes(60), // was AddMinutes(10): extend the lifetime to 60 minutes
    outerClaims);

In addition, we added a MemoryCache single-instance in-memory application cache (available since .NET 4 in System.Runtime.Caching). The access tokens are added to the cache with an absolute lifetime of 60 minutes, so they expire at the same time as they do in SharePoint. Once evicted from the memory cache, the access tokens are recreated with an additional 60 minutes of lifetime and stored in the cache again.

CacheItem item = new CacheItem(key, value);
CacheItemPolicy policy = new CacheItemPolicy();
policy.AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(60);
policy.RemovedCallback = new CacheEntryRemovedCallback((args) =>
{
    // Dispose evicted values that hold resources
    if (args.CacheItem.Value is IDisposable)
    {
        ((IDisposable)args.CacheItem.Value).Dispose();
    }
});
_cache.Set(item, policy);

Bonus: Refactored for reusability and testing

To make our code simpler and more understandable, we added the caching capability as a provider, exposed through the ICachingProvider interface. The interface exposes an operation for getting an object from the cache by a specific key. We wanted the code to be reusable not only for tokens but for all suitable situations.

public interface ICachingProvider
{
    T GetFromCache<T>(string key, Func<T> cacheMissCallback);
}

GetFromCache is a generic method that lets the caller get a typed object from the cache by providing a string key and a type. The operation also requires a fallback method: a generic Func<T> that returns an object of type T. The cache implementation invokes this delegate (a lambda expression, in most cases) if the specified key is not found. By invoking the delegate, we get the object from its source (as if there were no cache) and store it in the cache under the given key.
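For example, fetching a cached access token might look like this (the key format and the CreateAccessToken helper are hypothetical, just for illustration):

```csharp
ICachingProvider cache = new CachingProvider();

// On a cache hit the stored token is returned immediately; on a miss the
// lambda runs, creates a fresh token and caches it for the next 60 minutes.
string accessToken = cache.GetFromCache(
    "accessToken:" + userName,
    () => CreateAccessToken(userName));
```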

The full CachingProvider code is shown below. There are two auxiliary methods to add and retrieve items from the cache. To avoid concurrency exceptions, access to the MemoryCache instance is protected by a lock object called padLock.

public class CachingProvider : ICachingProvider
{
    protected MemoryCache _cache = MemoryCache.Default;
    protected static readonly object padLock = new object();

    private void AddItem(string key, object value)
    {
        lock (padLock)
        {
            CacheItem item = new CacheItem(key, value);
            CacheItemPolicy policy = new CacheItemPolicy();
            policy.AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(60);
            policy.RemovedCallback = new CacheEntryRemovedCallback((args) =>
            {
                // Dispose evicted values that hold resources
                if (args.CacheItem.Value is IDisposable)
                {
                    ((IDisposable)args.CacheItem.Value).Dispose();
                }
            });
            _cache.Set(item, policy);
        }
    }

    private object GetItem(string key)
    {
        lock (padLock)
        {
            return _cache[key];
        }
    }

    public T GetFromCache<T>(string key, Func<T> cacheMissCallback)
    {
        var objectFromCache = GetItem(key);
        T objectToReturn = default(T);

        if (objectFromCache == null)
        {
            // Cache miss: get the object from its source and cache it
            objectToReturn = cacheMissCallback();
            if (objectToReturn != null)
            {
                AddItem(key, objectToReturn);
            }
        }
        else if (objectFromCache is T)
        {
            // Cache hit: cast the cached object to the requested type
            objectToReturn = (T)objectFromCache;
        }

        return objectToReturn;
    }
}

One added bonus of having an interface is that we could have two implementations: CachingProvider (the normal caching service) and DummyCachingProvider (which simply bypasses the cache and returns the result of invoking the Func delegate). This way we could disable caching by injecting the right caching provider instance, and it also benefited unit testing, as we could test both the cached and non-cached code paths.

public class DummyCachingProvider : ICachingProvider
{
    public T GetFromCache<T>(string key, Func<T> cacheMissCallback)
    {
        // No caching at all: always invoke the fallback delegate
        return cacheMissCallback();
    }
}
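With both implementations in place, switching between them is just a matter of which instance you inject; for example (the TokenService consumer here is hypothetical):

```csharp
// Production wiring: real in-memory cache.
var service = new TokenService(new CachingProvider());

// Unit tests: bypass caching so every call exercises the callback path.
var serviceUnderTest = new TokenService(new DummyCachingProvider());
```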

Business Value of Social Computing (III)

Welcome to the third installment of the business value of social computing post series. You can review the first and second posts of the series if you haven't already done so. In this post I will highlight two main problems with social computing today.

We have seen what social means and what parts make up the mosaic of social computing. We have also seen why businesses use social and what benefits it can bring. However, businesses still cope with a lack of adoption. Why is that? From our experience deploying the Beezy social network for SharePoint to our many customers, we have learned some key insights into the social success patterns of companies that have succeeded with social. We have also learned what doesn't work, so we can avoid it in future deployments.

Here are the two pain points with social computing today:

Social Is Slow to Grow

Enterprise social networks (ESN) are still a young technology. Even though they have been available for years, they didn't become mainstream until a few years ago. Compared to well-established technologies such as OLAP data warehouses or business process management (BPM), social still has to crystallize into best-practice patterns and actionable guidance.

There are several reasons for this slowness in the adoption of social. It is a very disruptive technology, because it changes the way people work and share information. A change of this magnitude needs a lot of time to "stick" (some studies say it can take 18 months for a change to become durable). In today's fast-paced world, the instantaneous results we expect from deploying any new technology won't be there with social. Embracing social in a company is a marathon, not a sprint. It has to be nurtured, encouraged and guided.

Also, the lack of clear guidance and the variety of use cases for social in different sectors and companies don't do much to speed up adoption. There are too many contradictory messages and guidelines out there. A company that wants to adopt social computing can't just expect to plug it in and reap the benefits. It's slow-paced progress, but it's also a steady journey. Have patience!

There Are Common Pitfalls in the Way

The second frequent pain point with ESNs has to do with the common pitfalls and barriers to the successful adoption of social at work. Over and over again, companies have faced a lack of social adoption, and the root causes are very common to all of them.

The first cause of social failure is the lack of an overall strategy for social. As I said before, social computing isn't a database engine that you simply plug in. It needs to be deployed in the context of a greater business strategy, in order to achieve business goals. If you deploy first and ask about the business need later, you are putting the cart before the horse. Keep in mind that social is a tool, not an end in itself. (You can check part 2 of the series for a refresher on the common business cases for social.)

The second common cause of lack of adoption is having too many competing priorities. It means the company picks many desirable technology projects but doesn't really commit hard to any of them. While not putting all your eggs in one basket is common sense, not committing makes you a paper tiger: strong-looking from a distance but shaky when confronted. You will need "teeth" (business support and gravitas) to overcome resistance to change, and starting without that support is a recipe for disaster. If you don't have executive sponsors for your social endeavour, don't even start. I have warned you!

The third cause is not having a clear business case. As I said a few paragraphs ago, every business has a unique case for social. You will have to find yours. If it's lackluster or too broad, your users won't find any value in the extra hassle of learning the new technology. It won't work for them. Make sure to find your value proposition for social in order to start on the right track.


We have seen how social is by its nature slow to grow and how it's easily derailed by common pitfalls. If you plan your social computing deployment taking these facts into account, your chances of success will be higher.


Do you have something to add to this conversation? Make sure to leave a comment below.

The App Model in Detail (III): High-Trust

In the first two posts of this series we saw the evolution of custom code in SharePoint and the basic architecture of the app model. We also saw, in broad strokes, how an app authenticates with SharePoint using Azure Access Control Service (ACS) as an intermediary.

Apps in SharePoint Online work with this app authorization method, called low-trust.

Modelo Low-Trust: SharePoint + app + ACS

En este modelo, SharePoint es el responsable de pasar un token de contexto a la app, cuando hace la redirección a ella. La app valida el token de contexto y lo intercambia por un token de acceso, haciendo una llamada a ACS. Fijaos que el modelo low-trust deja a la app como un mero transmisor de tokens entre SharePoint y ACS. Por eso mismo se llama de "baja confianza". La app no puede añadir ni quitar nada de la información de autenticación y autorización. Todo esto viene dado en el token inicial que se le pasa a la app desde SharePoint.

Este modelo es muy útil para apps públicas sobre las que no tenemos ningún control. Por ello es el modelo usado para la SharePoint Store y las apps disponibles en SharePoint Online. En los entornos on-premise corporativos, este modelo requiere dar conectividad a Internet a los servidores de SharePoint y los servidores de las apps así como establecer una relación de confianza entre SharePoint on-premise y el servicio ACS de Azure.

The High-Trust Model: SharePoint + App

In corporate on-premises SharePoint environments we can use another app authorization and authentication model, in which SharePoint establishes a trust relationship with the app and delegates user authentication to it. This model is called high-trust; it is also known as S2S (server-to-server).

Don't confuse high-trust with full-trust. Full-trust is the custom SharePoint server-side code that we deploy to the GAC, which therefore has every permission SharePoint itself has. A high-trust app has, at most, the permissions it was granted during installation, no more and no less, exactly like a low-trust app.

So why is it called "high trust"? Because there is no intermediary (such as ACS): SharePoint trusts the app to build the appropriate access token itself.

For the high-trust model to work, we have to establish a trust relationship between the app and SharePoint. This is achieved with a digital certificate. The public part of the certificate is registered in SharePoint as a "trusted token issuer". (In the low-trust case, the only trusted token issuer is ACS.) The private part of the certificate is used by the app to sign the tokens it issues.

Since the app is a trusted token issuer, SharePoint will accept the tokens it builds. SharePoint first validates them with the certificate's public key, which proves they haven't been modified in transit. Then it applies the permissions the app has in SharePoint (granted when it was installed) and includes the information about the current user (which is contained in the access token built by the app).

Note that the app can build an access token for any user. This is the part where we have to "trust" the app. However, the app's scope of action is limited to the permissions it was granted at install time, so even though the app could "fake" the user information (claiming, say, to act on behalf of the system account), at most it can do what the app itself is allowed to do; it can't escape those boundaries.

Let's walk through the conversation "dance" between SharePoint and a high-trust app when the app is opened from the browser.


  1. The user opens a SharePoint page and clicks the app's link.
  2. SharePoint redirects to the app's URL (an HTTP 302 response). It includes no context token (unlike the low-trust case).
  3. The browser makes the request to the MiApp.com URL in response to the redirect.
  4. The application receives the HTTP request. To access SharePoint via CSOM or REST, the application needs an access token. It builds the access token, signs it with its certificate's private key, and includes it in the request to SharePoint.
  5. SharePoint verifies the access token's validity with the application certificate's public key and returns the data the application requested.
  6. The application renders the HTML to be displayed and returns it to the browser.

Building the Access Tokens

The same TokenHelper.cs helper class that we use in ASP.NET apps contains the methods to build access tokens for high-trust apps. If we look at its source code, we'll see two methods:

  • GetS2SAccessTokenWithWindowsIdentity
  • GetS2SClientContextWithWindowsIdentity

The first method returns an access token for a specific user, signed with the private key of the app's certificate. The second returns a SharePoint CSOM context (ClientContext) initialized for a specific user.

The user is passed to the method as a parameter and is a WindowsIdentity instance. Internally, the TokenHelper class converts this user into a set of claims and builds the token from those claims. This implementation detail requires the app to be hosted with Windows authentication enabled, since without it the app couldn't obtain a WindowsIdentity instance. However, it's possible to modify the TokenHelper class so that it issues tokens from a ClaimsIdentity (a claims-based identity) if we use ADFS or another identity provider (STS) that supports SAML credentials. We'll see that in one of the upcoming posts.
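As an illustration, here is a minimal sketch of what the landing page of a high-trust app could look like. It assumes the Visual Studio-generated TokenHelper.cs and Windows authentication enabled in IIS; the page class name and variable names are mine:

```csharp
using System;
using System.Security.Principal;
using Microsoft.SharePoint.Client;

public partial class Start : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // The host web URL arrives in the query string when SharePoint redirects to the app
        Uri hostWeb = new Uri(Request.QueryString["SPHostUrl"]);

        // Windows authentication gives us a WindowsIdentity for the current user
        WindowsIdentity currentUser = Request.LogonUserIdentity;

        // TokenHelper builds the access token and signs it with the app's private key
        using (ClientContext context =
            TokenHelper.GetS2SClientContextWithWindowsIdentity(hostWeb, currentUser))
        {
            context.Load(context.Web, w => w.Title);
            context.ExecuteQuery();
        }
    }
}
```

Note that no call to ACS appears anywhere: the app itself is the token issuer.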

But how does TokenHelper know where the signing certificate is? Well, we have to tell it, in the web.config file. Besides the mandatory app entries (such as the app's ID and its URI), we have to add some extra entries specifying the certificate path and the password for signing with the private key. In addition, the app's application pool account needs sufficient permissions on the folder where the certificate resides.
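As a sketch, the extra entries look roughly like this. The key names are the ones the Visual Studio SharePoint app template generates; the GUIDs, path, and password are placeholders:

```xml
<appSettings>
  <!-- Mandatory app entries -->
  <add key="ClientId" value="11111111-2222-3333-4444-555555555555" />
  <add key="IssuerId" value="66666666-7777-8888-9999-000000000000" />
  <!-- High-trust signing certificate: path to the .pfx file and its password -->
  <add key="ClientSigningCertificatePath" value="C:\Certs\HighTrustApp.pfx" />
  <add key="ClientSigningCertificatePassword" value="password" />
</appSettings>
```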


We've seen how high-trust apps simplify authorization and authentication in an on-premises environment. In the next post we'll see how to configure an on-premises SharePoint environment and build a high-trust app from scratch.

The App Model in Detail (II): The Pieces

In the first post of this series on the SharePoint 2013 app model, I explained the problems of having custom code inside SharePoint and how SharePoint has evolved to accommodate it, culminating in the SharePoint 2013 app model.

In this second post I'll explain the key pieces of the app model, to settle the basics before we dive into the details of app authorization and authentication.

The "Webs" of the App Model

Broadly speaking, the app model involves three basic pieces, each in the form of a website:

  • "host web" (SharePoint)
  • "app web" (SharePoint)
  • "remote web"

I'll show a complete diagram of the app pieces and we'll refer to it throughout this post. Understanding these pieces and how they relate to each other is half the work. Let's go step by step.


Host Web

Apps are installed on a SharePoint site and launched from there. That site, where the app is installed and has certain permissions, is called the host web.

App Web

When an app is installed, it can create a SharePoint site for its own use. This site, if it exists, is called the app web, and its URL is on a different domain than the host web's. This is done to prevent cross-site JavaScript calls.

What is an app web for? The idea is to store the app's own data there: partial results, settings, user profiles, and so on. The app web is invisible to the regular SharePoint user and can't be reached through the user interface.

Remote Web

For provider-hosted applications (that is, those that have server-side code, not just JavaScript), invoking the app takes the form of a redirect to the URL where the application is hosted. The site where the app is hosted is called the remote web. Most likely the app will be hosted on IIS (on-premises or in Azure), but it could run on any web stack (Apache, Linux, Node.js…). However, if we use ASP.NET as the app's platform, communicating with SharePoint becomes simpler, because libraries and helpers are available.

Permissions

Suppose we've just clicked the app's link in SharePoint, specifically at http://contoso/site1. As you already know, this means the app is installed at http://contoso/site1, which is the app's host web.

The app also has certain permissions on the host web, granted when it's installed. These permissions can range from reading a few lists in the host web to full control over the tenancy or the site collection that contains the host web. The permissions are declared in the app package (which we'll see later), and the person installing the app must have the authority to grant them.

In short, the app has the permissions on the host web that we granted it at install time. Not one more, not one less.

If the app has an app web (another of the parameters we can set in the app package), the app has full control over it. It can do and undo whatever it pleases. Since the app web isn't shown to the user and serves as a mere data repository for the app, there's no danger in giving the app full control over that part of SharePoint.

For our example, let's imagine the app web has the URL http://app123.contosoapps.

Communication Between SharePoint and the App

Back to the example. We clicked the app at http://contoso/site1 and it took us to https://wingtip/app1/start.aspx. This URL is part of the app package, and it's the one that leads to the app's remote web. In this case, let's imagine https://wingtip/app1 is an IIS site on a machine inside Contoso's datacenter: a machine that knows nothing about SharePoint and hosts an app built with ASP.NET WebForms.

ASP.NET loads the start.aspx page. At this point the app has to establish a connection to SharePoint to load the data it needs from there. Let's see how it does that.

The first thing the app in the remote web needs to know is which URLs to call to get data from the host web and the app web.

How does it know? We could store these URLs in the app's configuration (in web.config or in a database), but the usual approach is for SharePoint to include them in the URL it uses to call the app. So our app isn't invoked with a redirect to https://wingtip/app1/start.aspx but to https://wingtip/app1/start.aspx?SPHostUrl=http://contoso/site1&SPAppWebUrl=http://app123.contosoapps. As you can see, the SPHostUrl and SPAppWebUrl parameters carry the URLs of the host web and the app web, respectively.
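In an ASP.NET WebForms page on the remote web, reading those parameters is a one-liner each. A minimal sketch (the variable names are mine):

```csharp
protected void Page_Load(object sender, System.EventArgs e)
{
    // URLs that SharePoint appended when redirecting to the app
    string hostWebUrl = Request.QueryString["SPHostUrl"];   // e.g. http://contoso/site1
    string appWebUrl  = Request.QueryString["SPAppWebUrl"]; // e.g. http://app123.contosoapps
}
```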


Using the SharePoint client object model we could instantiate a context with those URLs, but we'd still be missing authentication. That is: with which user's credentials do we open the SharePoint context? Let's see how the app model solves this.

App Authentication

In the first post of this series I said that SharePoint 2013 apps have their own identity and can be assigned permissions. Well, this is the mechanism that solves the credentials problem: the SharePoint context is opened with the app's credentials.

Where do we get the app's credentials from? It depends on whether we're using apps in SharePoint Online (or in an on-premises farm federated with Azure Access Control Service, ACS), or using apps in an on-premises environment with certificates.

The first case is the most common and is called "low-trust". Here, SharePoint uses an Azure service called ACS (Access Control Service) to confirm the app's identity. How? Simply put, before redirecting to the app, SharePoint injects a small fragment of text into the request to the app, a context token, which the app will later use to obtain credentials.


This context token contains information about the app's identity. The (ASP.NET) app has to extract the token from the initial HTTP request, make a request to ACS, and exchange it for another token, called the access token. The access token lets us instantiate a SharePoint context, since it's an OAuth token that SharePoint 2013 accepts. It travels in the Authorization header of requests to the SharePoint 2013 REST interface, with the "Bearer" scheme.
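For example, a raw REST call with the access token could look like this sketch (assuming hostWebUrl and accessToken already hold the host web URL and the OAuth token string returned by ACS):

```csharp
using System.Net;

HttpWebRequest request = (HttpWebRequest)WebRequest.Create(
    hostWebUrl + "/_api/web/title");
// The access token travels in the Authorization header with the Bearer scheme
request.Headers.Add("Authorization", "Bearer " + accessToken);
request.Accept = "application/json;odata=verbose";

using (WebResponse response = request.GetResponse())
{
    // read and parse the JSON payload here
}
```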


You can see the exact contents of the context and access tokens at the following link: http://msdn.microsoft.com/library/office/fp179932.aspx#Tokens.

This whole process of extracting the tokens, validating them, exchanging them, and creating the SharePoint context is encapsulated in a class that Visual Studio auto-generates when you create a SharePoint app, called TokenHelper.cs. You can examine it and see how it makes the request to ACS and then attaches the access token when calling SharePoint.
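A typical usage sketch in the landing page of a low-trust ASP.NET app, assuming the generated TokenHelper.cs:

```csharp
using Microsoft.SharePoint.Client;

// Extract the context token that SharePoint sent along with the redirect
string contextTokenString = TokenHelper.GetContextTokenFromRequest(Request);
string hostWebUrl = Request.QueryString["SPHostUrl"];

// TokenHelper validates the context token, exchanges it at ACS for an
// access token, and opens a CSOM context with it
using (ClientContext context = TokenHelper.GetClientContextWithContextToken(
    hostWebUrl, contextTokenString, Request.Url.Authority))
{
    context.Load(context.Web, w => w.Title);
    context.ExecuteQuery();
}
```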

If we're on a non-.NET platform (PHP, for example), we can follow the same process, but we won't have a magic TokenHelper. We'll have to do the token extraction and construction, and the calls to ACS, by hand, but there's nothing intrinsically difficult about it.


As you can see, the "dance" between the app pieces is delicate and involves many concepts. I recommend re-reading this post until the interaction between host web, app web, and remote web, as well as the role of the tokens, is clear. If you want more details about tokens and low-trust authentication, I recommend the following article by Kirk Evans of the SharePoint team: http://blogs.msdn.com/b/kaevans/archive/2013/04/05/inside-sharepoint-2013-oauth-context-tokens.aspx.

In the next post we'll see how "high-trust" applications solve the authentication problem without delegating it to a third-party service like ACS.

The App Model in Detail (I): Introduction

Hi everyone,

In a recent project I got deeply involved with the SharePoint 2013 app model in corporate environments, with high-trust applications and federated claims authentication. My idea for this series of posts is to share that knowledge, since I believe there is little first-hand information about real-world use of the app model beyond demo apps.

In this first post I'll recap the evolution of the SharePoint programming model.

Before SharePoint 2010

For a "veteran" SharePoint developer, programming for SharePoint has always meant putting .NET code on the SharePoint server. Ever since SharePoint embraced the .NET programming model in the 2003 version, it has been this way: the code of our SharePoint solutions goes into the BIN folder of the SharePoint web application or into the server's GAC.


The benefit of this approach is access to the full power of the SharePoint server object model and of .NET. However, we expose ourselves to our code slowing down the server (since it runs in the same process as SharePoint's own code) and to having to prepare SharePoint updates carefully. Just remember the migrations from one SharePoint version to the next, and the hunt for custom code that had to be brought up to the new model.

The root of the problem is that "pure" SharePoint without custom code performs very well, while SharePoint with poorly optimized custom code (and, truth be told, most of the custom SharePoint code out there isn't optimized) can punish performance to the point of being unusable. In essence: nothing prevents us from slowing SharePoint down with our own slow code. (As an exercise, drop a web part that does a Thread.Sleep(5000) into SharePoint, and SharePoint will become 5 seconds slower.)
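The exercise from the paragraph above, as a minimal sketch (the class name is made up):

```csharp
using System.Threading;
using System.Web.UI;
using System.Web.UI.WebControls.WebParts;

public class SlowWebPart : WebPart
{
    protected override void CreateChildControls()
    {
        // This runs inside SharePoint's own worker process, so every page
        // hosting this web part takes at least 5 extra seconds to render
        Thread.Sleep(5000);
        Controls.Add(new LiteralControl("Done (slowly)."));
    }
}
```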

SharePoint 2010: CSOM and Sandboxed Solutions

To somehow loosen this tight coupling between SharePoint and our code, two new architectural components appeared in SharePoint 2010.

First, we got a client-side object model (CSOM), available for .NET, Silverlight, and JavaScript. With it, we can write code that runs outside SharePoint. The client object model isn't as powerful as the server object model, but it covers many common scenarios.

Second, the concept of sandboxed code appeared. For the first time, custom code could run in a process separate from SharePoint, subject to CPU-time and memory restrictions. This lets us keep the SharePoint environment stable and protect it, to some extent, from our potentially resource-hungry code.


Sandboxed solutions promised the best of both worlds: access to the server object model, together with control over the resource usage of the code that uses it.

SharePoint 2013: Cloud-based Apps (CBA)

And then SharePoint 2013 arrived! Right off the bat, it swept away the promising sandbox model with a single deprecating stroke. To replace it, it chose to banish custom code from the SharePoint server altogether, into the new app model (cloud-based applications, CBA).

Therefore, the preferred development model in SharePoint 2013 is to keep all code outside SharePoint (in apps), in another process that is potentially on another machine, and to use the client object model (introduced in SharePoint 2010) to interact with the data in SharePoint. This fulfills the important goal that slow custom code no longer slows SharePoint down, but at the cost of using an incomplete client API that isn't on par with the server object model.


So that the apps' custom code can call SharePoint regardless of the specific user, the OAuth standard was adapted to create app credentials. In SharePoint 2013 an app has its own identity and its own permissions. At runtime, the app's permissions and the permissions of the user running the app are combined to determine the effective permissions.

When the app model appeared, there were three "flavors" of apps: SharePoint-hosted, auto-hosted, and provider-hosted. In June 2014 the auto-hosted option disappeared. SharePoint-hosted apps can only contain JavaScript code, so they're rather useless for corporate applications. That leaves provider-hosted as the only model really used to build an app of any significance in SharePoint 2013.


In this post we've reviewed the evolution of SharePoint's custom development models, up to the SharePoint 2013 app model. In the next post I'll tackle the app model in detail, explaining the pieces that make it up and introducing the concepts of low-trust and high-trust apps. See you soon!

Checking for User Permissions and Getting UnauthorizedAccessException

In a recent project I have been writing code to check if an arbitrary user can create new documents in certain document libraries. In order to do the check, I used the good old DoesUserHavePermissions method, which is present in SPWeb, SPList and SPListItem objects (securable objects).


When using the DoesUserHavePermissions() method on a securable object, you may get an UnauthorizedAccessException.


There are multiple causes for this behavior.

First, the current user context may be such that the current user has no rights to enumerate permissions on the SPWeb/SPList/SPListItem object. If so, the exception will be raised.

So your first inclination is to use RunWithElevatedPrivileges to check the permissions. However, it throws the same exception. The cause is a token check that the DoesUserHavePermissions method performs internally (as explained by Phil Harding). The user token is compared against the current user, and the user token of the elevated object is not the same as the current user in the context, so the exception is thrown.


I managed to solve this issue by explicitly opening the securable object with a System Account token, instead of using RunWithElevatedPrivileges.

// Open the elevated site via the System Account token, instead of RunWithElevatedPrivileges.
// Note that DoesUserHavePermissions lives on SPWeb/SPList/SPListItem, not on SPSite.
using (SPSite elevSite = new SPSite(SPContext.Current.Site.ID, SPContext.Current.Site.SystemAccount.UserToken))
using (SPWeb elevWeb = elevSite.OpenWeb(SPContext.Current.Web.ID))
{
    bool hasPermissions = elevWeb.DoesUserHavePermissions(arbitraryUserLogin, arbitraryPermission);
}

Access Denied with RunWithElevatedPrivileges

A strange situation happened to me a few days ago, while checking a portion of SharePoint 2013 server-side code in a custom form. Basically, the code uses RunWithElevatedPrivileges to check that the current user has access to a certain site and certain libraries before uploading a file to a content-organizer-enabled library.

The Symptoms

The code that runs with elevated privileges on a POST event triggered "Access Denied" errors when trying to access SPWeb and SPList objects. The objects were declared inside the elevated-privilege code block, but the ULS logs still showed the "Access Denied" errors.

The Cause

According to an MSDN blog post, code running with elevated permissions has to validate the form digest before entering the elevated-permissions code block. Otherwise, it may give "Access Denied" errors.

The Solution

Just add SPUtility.ValidateFormDigest(); before the elevated-permissions block and the "Access Denied" errors disappear.
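Putting it together, the fix looks roughly like this sketch:

```csharp
using Microsoft.SharePoint;
using Microsoft.SharePoint.Utilities;

// Validate the form digest first, outside the elevated block
SPUtility.ValidateFormDigest();

SPSecurity.RunWithElevatedPrivileges(delegate()
{
    // Re-open the site and web under the elevated identity
    using (SPSite elevSite = new SPSite(SPContext.Current.Site.ID))
    using (SPWeb elevWeb = elevSite.OpenWeb(SPContext.Current.Web.ID))
    {
        // SPWeb/SPList access here no longer throws "Access Denied"
    }
});
```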